
Content Analysis – Methods, Types and Examples


Definition:

Content analysis is a research method used to analyze and interpret the characteristics of various forms of communication, such as text, images, or audio. It involves systematically analyzing the content of these materials, identifying patterns, themes, and other relevant features, and drawing inferences or conclusions based on the findings.

Content analysis can be used to study a wide range of topics, including media coverage of social issues, political speeches, advertising messages, and online discussions, among others. It is often used in qualitative research and can be combined with other methods to provide a more comprehensive understanding of a particular phenomenon.

Types of Content Analysis

There are generally two types of content analysis:

Quantitative Content Analysis

This type of content analysis involves the systematic and objective counting and categorization of the content of a particular form of communication, such as text or video. The data obtained is then subjected to statistical analysis to identify patterns, trends, and relationships between different variables. Quantitative content analysis is often used to study media content, advertising, and political speeches.

Qualitative Content Analysis

This type of content analysis is concerned with the interpretation and understanding of the meaning and context of the content. It involves the systematic analysis of the content to identify themes, patterns, and other relevant features, and to interpret the underlying meanings and implications of these features. Qualitative content analysis is often used to study interviews, focus groups, and other forms of qualitative data, where the researcher is interested in understanding the subjective experiences and perceptions of the participants.

Methods of Content Analysis

There are several methods of content analysis, including:

Conceptual Analysis

This method involves analyzing the meanings of key concepts used in the content being analyzed. The researcher identifies key concepts and analyzes how they are used, defining them and categorizing them into broader themes.

Content Analysis by Frequency

This method involves counting and categorizing the frequency of specific words, phrases, or themes that appear in the content being analyzed. The researcher identifies relevant keywords or phrases and systematically counts their frequency.

Comparative Analysis

This method involves comparing the content of two or more sources to identify similarities, differences, and patterns. The researcher selects relevant sources, identifies key themes or concepts, and compares how they are represented in each source.

Discourse Analysis

This method involves analyzing the structure and language of the content being analyzed to identify how the content constructs and represents social reality. The researcher analyzes the language used and the underlying assumptions, beliefs, and values reflected in the content.

Narrative Analysis

This method involves analyzing the content as a narrative, identifying the plot, characters, and themes, and analyzing how they relate to the broader social context. The researcher identifies the underlying messages conveyed by the narrative and their implications for the broader social context.

Guide to Conducting a Content Analysis

Here is a basic guide to conducting a content analysis:

  • Define your research question or objective: Before starting your content analysis, you need to define your research question or objective clearly. This will help you to identify the content you need to analyze and the type of analysis you need to conduct.
  • Select your sample: Select a representative sample of the content you want to analyze. This may involve selecting a random sample, a purposive sample, or a convenience sample, depending on the research question and the availability of the content.
  • Develop a coding scheme: Develop a coding scheme or a set of categories to use for coding the content. The coding scheme should be based on your research question or objective and should be reliable, valid, and comprehensive.
  • Train coders: Train coders to use the coding scheme and ensure that they have a clear understanding of the coding categories and procedures. You may also need to establish inter-coder reliability to ensure that different coders are coding the content consistently.
  • Code the content: Apply the coding scheme to the content. This may involve coding the content manually, using software, or a combination of both (a minimal coding sketch in Python follows this list).
  • Analyze the data: Once the content is coded, analyze the data using appropriate statistical or qualitative methods, depending on the research question and the type of data.
  • Interpret the results: Interpret the results of the analysis in the context of your research question or objective. Draw conclusions based on the findings and relate them to the broader literature on the topic.
  • Report your findings: Report your findings in a clear and concise manner, including the research question, methodology, results, and conclusions. Provide details about the coding scheme, inter-coder reliability, and any limitations of the study.
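
To make the coding and counting steps above concrete, here is a minimal Python sketch of a frequency-based coding pass. The coding scheme, category names, and sample documents are hypothetical stand-ins; a real study would use a carefully developed scheme and often dedicated qualitative analysis software.

```python
from collections import Counter
import re

# Hypothetical coding scheme: each category maps to the keywords that count toward it.
coding_scheme = {
    "economy": ["economy", "jobs", "inflation", "wages"],
    "health": ["health", "hospital", "vaccine", "disease"],
    "environment": ["climate", "emissions", "pollution"],
}

def code_document(text, scheme):
    """Count how often each category's keywords appear in one document."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {category: sum(counts[word] for word in keywords)
            for category, keywords in scheme.items()}

# Hypothetical sample of documents (e.g., news articles).
documents = [
    "Inflation and wages dominated the debate on the economy.",
    "The hospital reported rising vaccine uptake and better health outcomes.",
]

for i, doc in enumerate(documents, start=1):
    print(f"Document {i}:", code_document(doc, coding_scheme))
```

Each document is reduced to a count per category, which can then be fed into whatever statistical analysis the research question calls for.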

Applications of Content Analysis

Content analysis has numerous applications across different fields, including:

  • Media Research: Content analysis is commonly used in media research to examine the representation of different groups, such as race, gender, and sexual orientation, in media content. It can also be used to study media framing, media bias, and media effects.
  • Political Communication: Content analysis can be used to study political communication, including political speeches, debates, and news coverage of political events. It can also be used to study political advertising and the impact of political communication on public opinion and voting behavior.
  • Marketing Research: Content analysis can be used to study advertising messages, consumer reviews, and social media posts related to products or services. It can provide insights into consumer preferences, attitudes, and behaviors.
  • Health Communication: Content analysis can be used to study health communication, including the representation of health issues in the media, the effectiveness of health campaigns, and the impact of health messages on behavior.
  • Education Research: Content analysis can be used to study educational materials, including textbooks, curricula, and instructional materials. It can provide insights into the representation of different topics, perspectives, and values.
  • Social Science Research: Content analysis can be used in a wide range of social science research, including studies of social media, online communities, and other forms of digital communication. It can also be used to study interviews, focus groups, and other qualitative data sources.

Examples of Content Analysis

Here are some examples of content analysis:

  • Media Representation of Race and Gender: A content analysis could be conducted to examine the representation of different races and genders in popular media, such as movies, TV shows, and news coverage.
  • Political Campaign Ads: A content analysis could be conducted to study political campaign ads and the themes and messages used by candidates.
  • Social Media Posts: A content analysis could be conducted to study social media posts related to a particular topic, such as the COVID-19 pandemic, to examine the attitudes and beliefs of social media users.
  • Instructional Materials: A content analysis could be conducted to study the representation of different topics and perspectives in educational materials, such as textbooks and curricula.
  • Product Reviews: A content analysis could be conducted to study product reviews on e-commerce websites, such as Amazon, to identify common themes and issues mentioned by consumers.
  • News Coverage of Health Issues: A content analysis could be conducted to study news coverage of health issues, such as vaccine hesitancy, to identify common themes and perspectives.
  • Online Communities: A content analysis could be conducted to study online communities, such as discussion forums or social media groups, to understand the language, attitudes, and beliefs of the community members.

Purpose of Content Analysis

The purpose of content analysis is to systematically analyze and interpret the content of various forms of communication, such as written, oral, or visual, to identify patterns, themes, and meanings. Content analysis is used to study communication in a wide range of fields, including media studies, political science, psychology, education, sociology, and marketing research. The primary goals of content analysis include:

  • Describing and summarizing communication: Content analysis can be used to describe and summarize the content of communication, such as the themes, topics, and messages conveyed in media content, political speeches, or social media posts.
  • Identifying patterns and trends: Content analysis can be used to identify patterns and trends in communication, such as changes over time, differences between groups, or common themes or motifs.
  • Exploring meanings and interpretations: Content analysis can be used to explore the meanings and interpretations of communication, such as the underlying values, beliefs, and assumptions that shape the content.
  • Testing hypotheses and theories: Content analysis can be used to test hypotheses and theories about communication, such as the effects of media on attitudes and behaviors or the framing of political issues in the media.

When to use Content Analysis

Content analysis is a useful method when you want to analyze and interpret the content of various forms of communication, such as written, oral, or visual. Here are some specific situations where content analysis might be appropriate:

  • When you want to study media content: Content analysis is commonly used in media studies to analyze the content of TV shows, movies, news coverage, and other forms of media.
  • When you want to study political communication: Content analysis can be used to study political speeches, debates, news coverage, and advertising.
  • When you want to study consumer attitudes and behaviors: Content analysis can be used to analyze product reviews, social media posts, and other forms of consumer feedback.
  • When you want to study educational materials: Content analysis can be used to analyze textbooks, instructional materials, and curricula.
  • When you want to study online communities: Content analysis can be used to analyze discussion forums, social media groups, and other forms of online communication.
  • When you want to test hypotheses and theories: Content analysis can be used to test hypotheses and theories about communication, such as the framing of political issues in the media or the effects of media on attitudes and behaviors.

Characteristics of Content Analysis

Content analysis has several key characteristics that make it a useful research method. These include:

  • Objectivity: Content analysis aims to be an objective method of research, meaning that the researcher does not introduce their own biases or interpretations into the analysis. This is achieved by using standardized and systematic coding procedures.
  • Systematic: Content analysis involves the use of a systematic approach to analyze and interpret the content of communication. This involves defining the research question, selecting the sample of content to analyze, developing a coding scheme, and analyzing the data.
  • Quantitative: Content analysis often involves counting and measuring the occurrence of specific themes or topics in the content, making it a quantitative research method. This allows for statistical analysis and generalization of findings.
  • Contextual: Content analysis considers the context in which the communication takes place, such as the time period, the audience, and the purpose of the communication.
  • Iterative: Content analysis is an iterative process, meaning that the researcher may refine the coding scheme and analysis as they analyze the data, to ensure that the findings are valid and reliable.
  • Reliability and validity: Content analysis aims to be a reliable and valid method of research, meaning that the findings are consistent and accurate. This is achieved through inter-coder reliability tests and other measures to ensure the quality of the data and analysis.

Advantages of Content Analysis

There are several advantages to using content analysis as a research method, including:

  • Objective and systematic: Content analysis aims to be an objective and systematic method of research, which reduces the likelihood of bias and subjectivity in the analysis.
  • Large sample size: Content analysis allows for the analysis of a large sample of data, which increases the statistical power of the analysis and the generalizability of the findings.
  • Non-intrusive: Content analysis does not require the researcher to interact with the participants or disrupt their natural behavior, making it a non-intrusive research method.
  • Accessible data: Content analysis can be used to analyze a wide range of data types, including written, oral, and visual communication, making it accessible to researchers across different fields.
  • Versatile: Content analysis can be used to study communication in a wide range of contexts and fields, including media studies, political science, psychology, education, sociology, and marketing research.
  • Cost-effective: Content analysis is a cost-effective research method, as it does not require expensive equipment or participant incentives.

Limitations of Content Analysis

While content analysis has many advantages, there are also some limitations to consider, including:

  • Limited contextual information: Content analysis is focused on the content of communication, which means that contextual information may be limited. This can make it difficult to fully understand the meaning behind the communication.
  • Limited ability to capture nonverbal communication: Content analysis is limited to analyzing the content of communication that can be captured in written or recorded form. It may miss out on nonverbal communication, such as body language or tone of voice.
  • Subjectivity in coding: While content analysis aims to be objective, there may be subjectivity in the coding process. Different coders may interpret the content differently, which can lead to inconsistent results.
  • Limited ability to establish causality: Content analysis is a correlational research method, meaning that it cannot establish causality between variables. It can only identify associations between variables.
  • Limited generalizability: Content analysis is limited to the data that is analyzed, which means that the findings may not be generalizable to other contexts or populations.
  • Time-consuming: Content analysis can be a time-consuming research method, especially when analyzing a large sample of data. This can be a disadvantage for researchers who need to complete their research in a short amount of time.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Content Analysis

Content analysis is a research tool used to determine the presence of certain words, themes, or concepts within some given qualitative data (i.e. text). Using content analysis, researchers can quantify and analyze the presence, meanings, and relationships of such words, themes, or concepts. As an example, researchers can evaluate language used within a news article to search for bias or partiality. Researchers can then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time surrounding the text.

Description

Sources of data could be from interviews, open-ended questions, field research notes, conversations, or literally any occurrence of communicative language (such as books, essays, discussions, newspaper headlines, speeches, media, historical documents). A single study may analyze various forms of text in its analysis. To analyze the text using content analysis, the text must be coded, or broken down, into manageable units for analysis (i.e. “codes”). Once the text is coded, the codes can then be further grouped into “code categories” to summarize the data even further.

Three different definitions of content analysis are provided below.

Definition 1: “Any technique for making inferences by systematically and objectively identifying special characteristics of messages.” (from Holsti, 1968)

Definition 2: “An interpretive and naturalistic approach. It is both observational and narrative in nature and relies less on the experimental elements normally associated with scientific research (reliability, validity, and generalizability).” (from Ethnography, Observational Research, and Narrative Inquiry, 1994-2012)

Definition 3: “A research technique for the objective, systematic and quantitative description of the manifest content of communication.” (from Berelson, 1952)

Uses of Content Analysis

Identify the intentions, focus or communication trends of an individual, group or institution

Describe attitudinal and behavioral responses to communications

Determine the psychological or emotional state of persons or groups

Reveal international differences in communication content

Reveal patterns in communication content

Pre-test and improve an intervention or survey prior to launch

Analyze focus group interviews and open-ended questions to complement quantitative data

Types of Content Analysis

There are two general types of content analysis: conceptual analysis and relational analysis. Conceptual analysis determines the existence and frequency of concepts in a text. Relational analysis develops the conceptual analysis further by examining the relationships among concepts in a text. Each type of analysis may lead to different results, conclusions, interpretations and meanings.

Conceptual Analysis

Typically people think of conceptual analysis when they think of content analysis. In conceptual analysis, a concept is chosen for examination and the analysis involves quantifying and counting its presence. The main goal is to examine the occurrence of selected terms in the data. Terms may be explicit or implicit. Explicit terms are easy to identify. Coding of implicit terms is more complicated: you need to decide the level of implication and base judgments on subjectivity (an issue for reliability and validity). Therefore, coding of implicit terms involves using a dictionary or contextual translation rules or both.

To begin a conceptual content analysis, first identify the research question and choose a sample or samples for analysis. Next, the text must be coded into manageable content categories. This is basically a process of selective reduction. By reducing the text to categories, the researcher can focus on and code for specific words or patterns that inform the research question.

General steps for conducting a conceptual content analysis:

1. Decide the level of analysis: word, word sense, phrase, sentence, themes

2. Decide how many concepts to code for: develop a pre-defined or interactive set of categories or concepts. Decide either: A. to allow flexibility to add categories through the coding process, or B. to stick with the pre-defined set of categories.

Option A allows for the introduction and analysis of new and important material that could have significant implications to one’s research question.

Option B allows the researcher to stay focused and examine the data for specific concepts.

3. Decide whether to code for existence or frequency of a concept. The decision changes the coding process.

When coding for the existence of a concept, the researcher would count a concept only once if it appeared at least once in the data and no matter how many times it appeared.

When coding for the frequency of a concept, the researcher would count the number of times a concept appears in a text.

4. Decide on how you will distinguish among concepts:

Should words be coded exactly as they appear, or coded as the same when they appear in different forms? For example, “dangerous” vs. “dangerousness”. The point here is to create coding rules so that these word segments are transparently categorized in a logical fashion. The rules could make all of these word segments fall into the same category, or perhaps the rules can be formulated so that the researcher can distinguish these word segments into separate codes.

What level of implication is to be allowed? Words that imply the concept or words that explicitly state the concept? For example, “dangerous” vs. “the person is scary” vs. “that person could cause harm to me”. These word segments may not merit separate categories, due to the implicit meaning of “dangerous”.

5. Develop rules for coding your texts. After decisions of steps 1-4 are complete, a researcher can begin developing rules for translating text into codes. This will keep the coding process organized and consistent. The researcher can code for exactly what he or she wants to code. Validity of the coding process is ensured when the researcher is consistent and coherent in their codes, meaning that they follow their translation rules. In content analysis, abiding by the translation rules is equivalent to validity.

6. Decide what to do with irrelevant information: should this be ignored (e.g. common English words like “the” and “and”), or used to reexamine the coding scheme in the case that it would add to the outcome of coding?

7. Code the text: This can be done by hand or by using software. By using software, researchers can input categories and have coding done automatically, quickly and efficiently, by the software program. When coding is done by hand, a researcher can recognize errors far more easily (e.g. typos, misspelling). If using computer coding, text could be cleaned of errors to include all available data. This decision of hand vs. computer coding is most relevant for implicit information where category preparation is essential for accurate coding.

8. Analyze your results: Draw conclusions and generalizations where possible. Determine what to do with irrelevant, unwanted, or unused text: reexamine, ignore, or reassess the coding scheme. Interpret results carefully as conceptual content analysis can only quantify the information. Typically, general trends and patterns can be identified.
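
The existence-versus-frequency choice from step 3 can be expressed in a few lines of code. This is a minimal sketch with a hypothetical concept list; it ignores implicit terms and the translation rules discussed in steps 4 and 5.

```python
import re

concepts = ["dangerous", "harm", "scary"]  # hypothetical concept terms

def code_text(text, concepts, mode="frequency"):
    """Code a text for a list of concept terms.

    mode="existence": each concept counts at most once per text.
    mode="frequency": each concept is counted every time it appears.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    results = {}
    for concept in concepts:
        hits = tokens.count(concept)
        results[concept] = (1 if hits > 0 else 0) if mode == "existence" else hits
    return results

text = "That dog is dangerous, truly dangerous, and could cause harm."
print(code_text(text, concepts, mode="existence"))   # {'dangerous': 1, 'harm': 1, 'scary': 0}
print(code_text(text, concepts, mode="frequency"))   # {'dangerous': 2, 'harm': 1, 'scary': 0}
```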

Relational Analysis

Relational analysis begins like conceptual analysis, where a concept is chosen for examination. However, the analysis involves exploring the relationships between concepts. Individual concepts are viewed as having no inherent meaning and rather the meaning is a product of the relationships among concepts.

To begin a relational content analysis, first identify a research question and choose a sample or samples for analysis. The research question must be focused so the concept types are not open to interpretation and can be summarized. Next, select the text for analysis carefully, balancing having enough information for a thorough analysis against having so much material that the coding process becomes too arduous to yield meaningful and worthwhile results.

There are three subcategories of relational analysis to choose from prior to going on to the general steps.

Affect extraction: an emotional evaluation of concepts explicit in a text. A challenge to this method is that emotions can vary across time, populations, and space. However, it could be effective at capturing the emotional and psychological state of the speaker or writer of the text.

Proximity analysis: an evaluation of the co-occurrence of explicit concepts in the text. Text is defined as a string of words called a “window” that is scanned for the co-occurrence of concepts. The result is the creation of a “concept matrix”, or a group of interrelated co-occurring concepts that would suggest an overall meaning.

Cognitive mapping: a visualization technique for either affect extraction or proximity analysis. Cognitive mapping attempts to create a model of the overall meaning of the text such as a graphic map that represents the relationships between concepts.

General steps for conducting a relational content analysis:

1. Determine the type of analysis: Once the sample has been selected, the researcher needs to determine what types of relationships to examine and the level of analysis: word, word sense, phrase, sentence, themes.

2. Reduce the text to categories and code for words or patterns. A researcher can code for existence of meanings or words.

3. Explore the relationship between concepts: Once the words are coded, the text can be analyzed for the following:

Strength of relationship: degree to which two or more concepts are related.

Sign of relationship: are concepts positively or negatively related to each other?

Direction of relationship: the types of relationship that categories exhibit. For example, “X implies Y” or “X occurs before Y” or “if X then Y” or if X is the primary motivator of Y.

4. Code the relationships: A difference between conceptual and relational analysis is that the statements or relationships between concepts are coded.

5. Perform statistical analyses: Explore differences or look for relationships among the identified variables during coding.

6. Map out representations: such as decision mapping and mental models.
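
To illustrate the proximity-analysis idea from the steps above, the sketch below scans a text with a fixed-size word window and tallies how often pairs of concepts co-occur, yielding a simple concept matrix. The window size and concept list are hypothetical choices, and overlapping windows may count the same pair more than once.

```python
import re
from collections import defaultdict
from itertools import combinations

concepts = {"economy", "jobs", "inflation", "health"}  # hypothetical concepts
WINDOW = 8  # hypothetical window size, in words

def concept_matrix(text, concepts, window=WINDOW):
    """Count co-occurrences of concepts within a sliding word window."""
    tokens = re.findall(r"[a-z']+", text.lower())
    matrix = defaultdict(int)
    for start in range(max(1, len(tokens) - window + 1)):
        # concepts present in this window; overlapping windows may re-count a pair
        present = sorted(set(tokens[start:start + window]) & concepts)
        for a, b in combinations(present, 2):
            matrix[(a, b)] += 1
    return dict(matrix)

text = ("The economy added jobs even as inflation rose, "
        "and health spending followed the economy downward.")
print(concept_matrix(text, concepts))
```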

Reliability and Validity

Reliability: Because of the human nature of researchers, coding errors can never be eliminated but only minimized. Generally, 80% is an acceptable margin for reliability. Three criteria comprise the reliability of a content analysis:

Stability: the tendency for coders to consistently re-code the same data in the same way over a period of time.

Reproducibility: the tendency for a group of coders to classify category membership in the same way.

Accuracy: extent to which the classification of text corresponds to a standard or norm statistically.
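
A quick way to check coding against the 80% figure mentioned above is simple percent agreement between two coders; measures such as Cohen's kappa, which correct for chance agreement, are often preferred. The coder decisions below are hypothetical.

```python
def percent_agreement(coder_a, coder_b):
    """Share of coding units on which two coders assigned the same category."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical category assignments for ten coding units.
coder_a = ["pos", "neg", "neg", "pos", "neu", "pos", "neg", "pos", "neu", "pos"]
coder_b = ["pos", "neg", "pos", "pos", "neu", "pos", "neg", "neg", "neu", "pos"]

agreement = percent_agreement(coder_a, coder_b)
print(f"Percent agreement: {agreement:.0%}")  # 80% here, right at the conventional threshold
```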

Validity: Three criteria comprise the validity of a content analysis:

Closeness of categories: this can be achieved by utilizing multiple classifiers to arrive at an agreed upon definition of each specific category. Using multiple classifiers, a concept category that may be an explicit variable can be broadened to include synonyms or implicit variables.

Conclusions: What level of implication is allowable? Do conclusions correctly follow the data? Are results explainable by other phenomena? This becomes especially problematic when using computer software for analysis and distinguishing between synonyms. For example, the word “mine,” variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. Software can obtain an accurate count of that word’s occurrence and frequency, but not be able to produce an accurate accounting of the meaning inherent in each particular usage. This problem could throw off one’s results and make any conclusion invalid.

Generalizability of the results to a theory: dependent on the clear definitions of concept categories, how they are determined and how reliable they are at measuring the idea one is seeking to measure. Generalizability parallels reliability as much of it depends on the three criteria for reliability.

Advantages of Content Analysis

Directly examines communication using text

Allows for both qualitative and quantitative analysis

Provides valuable historical and cultural insights over time

Allows a closeness to data

Coded form of the text can be statistically analyzed

Unobtrusive means of analyzing interactions

Provides insight into complex models of human thought and language use

When done well, is considered a relatively “exact” research method

Content analysis is a readily-understood and an inexpensive research method

A more powerful tool when combined with other research methods such as interviews, observation, and use of archival records. It is very useful for analyzing historical material, especially for documenting trends over time.

Disadvantages of Content Analysis

Can be extremely time consuming

Is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation

Is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study

Is inherently reductive, particularly when dealing with complex texts

Tends too often to simply consist of word counts

Often disregards the context that produced the text, as well as the state of things after the text is produced

Can be difficult to automate or computerize

Textbooks & Chapters  

Berelson, Bernard. Content Analysis in Communication Research. New York: Free Press, 1952.

Busha, Charles H. and Stephen P. Harter. Research Methods in Librarianship: Techniques and Interpretation. New York: Academic Press, 1980.

de Sola Pool, Ithiel. Trends in Content Analysis. Urbana: University of Illinois Press, 1959.

Krippendorff, Klaus. Content Analysis: An Introduction to its Methodology. Beverly Hills: Sage Publications, 1980.

Fielding, NG & Lee, RM. Using Computers in Qualitative Research. SAGE Publications, 1991. (Refer to Chapter by Seidel, J. ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’.)

Methodological Articles  

Hsieh HF & Shannon SE. (2005). Three Approaches to Qualitative Content Analysis. Qualitative Health Research. 15(9): 1277-1288.

Elo S, Kaarianinen M, Kanste O, Polkki R, Utriainen K, & Kyngas H. (2014). Qualitative Content Analysis: A focus on trustworthiness. Sage Open. 4:1-10.

Application Articles  

Abroms LC, Padmanabhan N, Thaweethai L, & Phillips T. (2011). iPhone Apps for Smoking Cessation: A content analysis. American Journal of Preventive Medicine. 40(3):279-285.

Ullstrom S, Sachs MA, Hansson J, Ovretveit J, & Brommels M. (2014). Suffering in Silence: a qualitative study of second victims of adverse events. BMJ Quality & Safety. 23:325-331.

Owen P. (2012). Portrayals of Schizophrenia by Entertainment Media: A Content Analysis of Contemporary Movies. Psychiatric Services. 63:655-659.

Choosing whether to conduct a content analysis by hand or by using computer software can be difficult. Refer to ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’ listed above in “Textbooks and Chapters” for a discussion of the issue.

QSR NVivo:  http://www.qsrinternational.com/products.aspx

Atlas.ti:  http://www.atlasti.com/webinars.html

R- RQDA package:  http://rqda.r-forge.r-project.org/

Rolly Constable, Marla Cowell, Sarita Zornek Crawford, David Golden, Jake Hartvigsen, Kathryn Morgan, Anne Mudgett, Kris Parrish, Laura Thomas, Erika Yolanda Thompson, Rosie Turner, and Mike Palmquist. (1994-2012). Ethnography, Observational Research, and Narrative Inquiry. Writing@CSU. Colorado State University. Available at: https://writing.colostate.edu/guides/guide.cfm?guideid=63 .

As an introduction to Content Analysis by Michael Palmquist, this is the main resource on Content Analysis on the Web. It is comprehensive, yet succinct. It includes examples and an annotated bibliography. The information contained in the narrative above draws heavily from and summarizes Michael Palmquist’s excellent resource on Content Analysis but was streamlined for the purpose of doctoral students and junior researchers in epidemiology.

At Columbia University Mailman School of Public Health, more detailed training is available through the Department of Sociomedical Sciences- P8785 Qualitative Research Methods.



Content Analysis | A Step-by-Step Guide with Examples

Published on 5 May 2022 by Amy Luo. Revised on 5 December 2022.

Content analysis is a research method used to identify patterns in recorded communication. To conduct content analysis, you systematically collect data from a set of texts, which can be written, oral, or visual:

  • Books, newspapers, and magazines
  • Speeches and interviews
  • Web content and social media posts
  • Photographs and films

Content analysis can be both quantitative (focused on counting and measuring) and qualitative (focused on interpreting and understanding). In both types, you categorise or ‘code’ words, themes, and concepts within the texts and then analyse the results.

What is content analysis used for?

Researchers use content analysis to find out about the purposes, messages, and effects of communication content. They can also make inferences about the producers and audience of the texts they analyse.

Content analysis can be used to quantify the occurrence of certain words, phrases, subjects, or concepts in a set of historical or contemporary texts.

In addition, content analysis can be used to make qualitative inferences by analysing the meaning and semantic relationship of words and concepts.

Because content analysis can be applied to a broad range of texts, it is used in a variety of fields, including marketing, media studies, anthropology, cognitive science, psychology, and many social science disciplines. It has various possible goals:

  • Finding correlations and patterns in how concepts are communicated
  • Understanding the intentions of an individual, group, or institution
  • Identifying propaganda and bias in communication
  • Revealing differences in communication in different contexts
  • Analysing the consequences of communication content, such as the flow of information or audience responses

Advantages of content analysis

  • Unobtrusive data collection

You can analyse communication and social interaction without the direct involvement of participants, so your presence as a researcher doesn’t influence the results.

  • Transparent and replicable

When done well, content analysis follows a systematic procedure that can easily be replicated by other researchers, yielding results with high reliability.

  • Highly flexible

You can conduct content analysis at any time, in any location, and at low cost. All you need is access to the appropriate sources.

Disadvantages of content analysis

Focusing on words or phrases in isolation can sometimes be overly reductive, disregarding context, nuance, and ambiguous meanings.

Content analysis almost always involves some level of subjective interpretation, which can affect the reliability and validity of the results and conclusions.

  • Time intensive

Manually coding large volumes of text is extremely time-consuming, and it can be difficult to automate effectively.

How to conduct content analysis

If you want to use content analysis in your research, you need to start with a clear, direct research question.

Next, you follow these five steps.

Step 1: Select the content you will analyse

Based on your research question, choose the texts that you will analyse. You need to decide:

  • The medium (e.g., newspapers, speeches, or websites) and genre (e.g., opinion pieces, political campaign speeches, or marketing copy)
  • The criteria for inclusion (e.g., newspaper articles that mention a particular event, speeches by a certain politician, or websites selling a specific type of product)
  • The parameters in terms of date range, location, etc.

If there are only a small number of texts that meet your criteria, you might analyse all of them. If there is a large volume of texts, you can select a sample.

Step 2: Define the units and categories of analysis

Next, you need to determine the level at which you will analyse your chosen texts. This means defining:

  • The unit(s) of meaning that will be coded. For example, are you going to record the frequency of individual words and phrases, the characteristics of people who produced or appear in the texts, the presence and positioning of images, or the treatment of themes and concepts?
  • The set of categories that you will use for coding. Categories can be objective characteristics (e.g., aged 30–40, lawyer, parent) or more conceptual (e.g., trustworthy, corrupt, conservative, family-oriented).
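
One way to make the units and categories explicit before coding begins is to write the scheme down as a small data structure, as in the sketch below. The categories, definitions, and keywords shown are hypothetical and would come from your own research question.

```python
from dataclasses import dataclass, field

@dataclass
class Category:
    name: str
    definition: str                                 # coding rule in plain language
    keywords: list = field(default_factory=list)    # indicators used to assign the category

# Hypothetical scheme for coding political campaign speeches.
scheme = [
    Category("economy", "Mentions of jobs, prices, taxes, or growth",
             ["jobs", "inflation", "taxes", "growth"]),
    Category("family-oriented", "Appeals to family life or parenting",
             ["family", "children", "parents"]),
]

unit_of_analysis = "sentence"  # could also be a word, phrase, or theme

for category in scheme:
    print(f"{category.name}: {category.definition}")
```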

Step 3: Develop a set of rules for coding

Coding involves organising the units of meaning into the previously defined categories. Especially with more conceptual categories, it’s important to clearly define the rules for what will and won’t be included to ensure that all texts are coded consistently.

Coding rules are especially important if multiple researchers are involved, but even if you’re coding all of the text by yourself, recording the rules makes your method more transparent and reliable.

Step 4: Code the text according to the rules

You go through each text and record all relevant data in the appropriate categories. This can be done manually or aided with computer programs, such as QSR NVivo, Atlas.ti, and Diction, which can help speed up the process of counting and categorising words and phrases.

Step 5: Analyse the results and draw conclusions

Once coding is complete, the collected data is examined to find patterns and draw conclusions in response to your research question. You might use statistical analysis to find correlations or trends, discuss your interpretations of what the results mean, and make inferences about the creators, context, and audience of the texts.
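
Once the texts are coded, even a simple tabulation can surface patterns. The sketch below aggregates hypothetical coded units by year and category and reports proportions; a real analysis might follow up with statistical tests.

```python
from collections import defaultdict

# Hypothetical coded results: (publication year, category) for each coded unit.
coded_units = [
    (2020, "economy"), (2020, "economy"), (2020, "health"),
    (2021, "economy"), (2021, "health"), (2021, "health"), (2021, "health"),
]

# Tabulate counts per year and category, then convert to proportions.
counts = defaultdict(lambda: defaultdict(int))
for year, category in coded_units:
    counts[year][category] += 1

for year in sorted(counts):
    total = sum(counts[year].values())
    shares = {cat: round(n / total, 2) for cat, n in counts[year].items()}
    print(year, shares)  # e.g., 2020 {'economy': 0.67, 'health': 0.33}
```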


  • What is content analysis?


When you're conducting qualitative research, you'll find yourself analyzing various texts. Perhaps you'll be evaluating transcripts from audio interviews you've conducted. Or you may find yourself assessing the results of a survey filled with open-ended questions.


Content analysis is a research method used to identify the presence of various concepts, words, and themes in different texts. Two types of content analysis exist: conceptual analysis and relational analysis. In the former, researchers determine whether and how frequently certain concepts appear in a text. In relational analysis, researchers explore how different concepts are related to one another in a text.

Both types of content analysis require the researcher to code the text. Coding the text means breaking it down into different categories that allow it to be analyzed more easily.

  • What are some common uses of content analysis?

You can use content analysis to analyze many forms of text, including:

Interview and discussion transcripts

Newspaper articles and headlines

Literary works

Historical documents

Government reports

Academic papers

Music lyrics

Researchers commonly use content analysis to draw insights and conclusions from literary works. Historians and biographers may apply this approach to letters, papers, and other historical documents to gain insight into the historical figures and periods they are writing about. Market researchers can also use it to evaluate brand performance and perception.

Some researchers have used content analysis to explore differences in decision-making and other cognitive processes. While researchers traditionally used this approach to explore human cognition, content analysis is also at the heart of machine learning approaches currently being used and developed by software and AI companies.

  • Conducting a conceptual analysis

Conceptual analysis is more commonly associated with content analysis than relational analysis. 

In conceptual analysis, you're looking for the appearance and frequency of different concepts. Why? This information can help further your qualitative or quantitative analysis of a text. It's an inexpensive and easily understood research method that can help you draw inferences and conclusions about your research subject. And while it is a relatively straightforward analytical tool, it does consist of a multi-step process that you must closely follow to ensure the reliability and validity of your study.

When you're ready to conduct a conceptual analysis, refer to your research question and the text. Ask yourself what information likely found in the text is relevant to your question. You'll need to know this to determine how you'll code the text. Then follow these steps:

1. Determine whether you're looking for explicit terms or implicit terms.

Explicit terms are those that directly appear in the text, while implicit ones are those that the text implies or alludes to or that you can infer. 

Coding for explicit terms is straightforward. For example, if you're looking to code a text for an author's explicit use of color, you'd simply code for every instance a color appears in the text. However, if you're coding for implicit terms, you'll need to determine and define how you're identifying the presence of the term first. Doing so involves a certain amount of subjectivity and may impinge upon the reliability and validity of your study.

2. Next, identify the level at which you'll conduct your analysis.

You can search for words, phrases, or sentences encapsulating your terms. You can also search for concepts and themes, but you'll need to define how you expect to identify them in the text. You must also define rules for how you'll code different terms to reduce ambiguity. For example, if, in an interview transcript, a person repeats a word one or more times in a row as a verbal tic, should you code it more than once? And what will you do with irrelevant data that appears in a term if you're coding for sentences? 

Defining these rules upfront can help make your content analysis more efficient and your final analysis more reliable and valid.

3. You'll need to determine whether you're coding for a concept or theme's existence or frequency.

If you're coding for its existence, you’ll only count it once, at its first appearance, no matter how many times it subsequently appears. If you're searching for frequency, you'll count the number of its appearances in the text.

4. You'll also want to determine the number of terms you want to code for and how you may wish to categorize them.

For example, say you're conducting a content analysis of customer service call transcripts and looking for evidence of customer dissatisfaction with a product or service. You might create categories that refer to different elements with which customers might be dissatisfied, such as price, features, packaging, technical support, and so on. Then you might look for sentences that refer to those product elements according to each category in a negative light.

5. Next, you'll need to develop translation rules for your codes.

Those rules should be clear and consistent, allowing you to keep track of your data in an organized fashion.

6. After you've determined the terms for which you're searching, your categories, and translation rules, you're ready to code.

You can do so by hand or via software. Software is quite helpful when you have multiple texts. But it also becomes more vital for you to have developed clear codes, categories, and translation rules, especially if you're looking for implicit terms and concepts. Otherwise, your software-driven analysis may miss key instances of the terms you seek.

7. When you have your text coded, it's time to analyze it.

Look for trends and patterns in your results and use them to draw relevant conclusions about your research subject.
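
Continuing the customer-service example from step 4, a rough sketch of flagging negative sentences by product element might look like the following. The category keywords and the tiny list of negative cue words are hypothetical stand-ins for a properly developed and validated scheme.

```python
import re

categories = {  # hypothetical product elements customers may complain about
    "price": ["price", "expensive", "cost"],
    "features": ["feature", "missing", "broken"],
    "technical support": ["support", "helpdesk", "agent"],
}
negative_words = {"not", "never", "disappointed", "frustrating", "broken", "useless"}

def code_transcript(transcript):
    """Count sentences that mention a category and contain a negative cue word."""
    counts = {name: 0 for name in categories}
    for sentence in re.split(r"[.!?]+", transcript.lower()):
        words = set(re.findall(r"[a-z']+", sentence))
        if words & negative_words:
            for name, keywords in categories.items():
                if words & set(keywords):
                    counts[name] += 1
    return counts

transcript = ("The price is way too expensive and I am disappointed. "
              "Support never called me back. The features work fine.")
print(code_transcript(transcript))  # {'price': 1, 'features': 0, 'technical support': 1}
```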

  • Conducting a relational analysis

In a relational analysis, you're examining the relationship between different terms that appear in your text(s). To do so requires you to code your texts in a similar fashion as you would in a conceptual analysis. However, depending on the type of relational analysis you're trying to conduct, you may need to follow slightly different rules.

Three types of relational analyses are commonly used: affect extraction, proximity analysis, and cognitive mapping.

Affect extraction

This type of relational analysis involves evaluating the different emotional concepts found in a specific text. While the insights from affect extraction can be invaluable, conducting it may prove difficult depending on the text. For example, if the text captures people's emotional states at different times and from different populations, you may find it difficult to compare them and draw appropriate inferences.

Proximity analysis

A relatively simpler analytical approach than affect extraction, proximity analysis assesses the co-occurrence of explicit concepts in a text. You can create what's known as a concept matrix, which is a group of interrelated co-occurring concepts. Concept matrices help evaluate and determine the overall meaning of a text or the identification of a secondary message or theme.

Cognitive mapping

You can use cognitive mapping as a way to visualize the results of either affect extraction or proximity analysis. This technique uses affect extraction or proximity analysis results to create a graphic map illustrating the relationship between co-occurring emotions or concepts.

To conduct a relational analysis, you must start by determining the type of analysis that best fits the study: affect extraction or proximity analysis. 

Complete steps one through six as outlined above. When it comes to the seventh step, analyze the text according to the relational analysis type you've chosen. During this step, feel free to use cognitive mapping to help draw inferences and conclusions about the relationships between co-occurring emotions or concepts. And use other tools, such as mental modeling and decision mapping as necessary, to analyze the results.
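
If you want a cognitive map from a proximity analysis, one lightweight option is to turn co-occurrence counts into a weighted edge list that a graphing tool can draw. The sketch below uses only the standard library and hypothetical co-occurrence data; a library such as networkx, or Graphviz for the DOT output, could render the map.

```python
# Hypothetical co-occurrence counts from a proximity analysis.
cooccurrence = {
    ("price", "frustration"): 5,
    ("support", "frustration"): 3,
    ("support", "praise"): 1,
}

# Build a simple adjacency structure: concept -> {neighbouring concept: weight}.
graph = {}
for (a, b), weight in cooccurrence.items():
    graph.setdefault(a, {})[b] = weight
    graph.setdefault(b, {})[a] = weight
print(graph)

# Emit the map in DOT format; Graphviz (if available) can render it as a picture.
print("graph concept_map {")
for (a, b), weight in cooccurrence.items():
    print(f'  "{a}" -- "{b}" [label="{weight}"];')
print("}")
```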

  • The advantages of content analysis

Content analysis provides researchers with a robust and inexpensive method to qualitatively and quantitatively analyze a text. By coding the data, you can perform statistical analyses of the data to affirm and reinforce conclusions you may draw. And content analysis can provide helpful insights into language use, behavioral patterns, and historical or cultural conventions that can be valuable beyond the scope of the initial study.

When content analyses are applied to interview data, the approach provides a way to closely analyze data without needing interview-subject interaction, which can be helpful in certain contexts. For example, suppose you want to analyze the perceptions of a group of geographically diverse individuals. In this case, you can conduct a content analysis of existing interview transcripts rather than assuming the time and expense of conducting new interviews.

What is meant by content analysis?

Content analysis is a research method that helps a researcher explore the occurrence of and relationships between various words, phrases, themes, or concepts in a text or set of texts. The method allows researchers in different disciplines to conduct qualitative and quantitative analyses on a variety of texts.

Where is content analysis used?

Content analysis is used in multiple disciplines, as you can use it to evaluate a variety of texts. You can find applications in anthropology, communications, history, linguistics, literary studies, marketing, political science, psychology, and sociology, among other disciplines.

What are the two types of content analysis?

Content analysis may be either conceptual or relational. In a conceptual analysis, researchers examine a text for the presence and frequency of specific words, phrases, themes, and concepts. In a relational analysis, researchers draw inferences and conclusions about the nature of the relationships of co-occurring words, phrases, themes, and concepts in a text.

What's the difference between content analysis and thematic analysis?

Content analysis typically uses a descriptive approach to the data and may use either qualitative or quantitative analytical methods. By contrast, a thematic analysis only uses qualitative methods to explore frequently occurring themes in a text.


How to do a content analysis


What is content analysis?


In research, content analysis is the process of analyzing content and its features with the aim of identifying patterns and the presence of words, themes, and concepts within the content. Simply put, content analysis is a research method that aims to present the trends, patterns, concepts, and ideas in content as objective, quantitative or qualitative data, depending on the specific use case.

As such, some of the objectives of content analysis include:

  • Simplifying complex, unstructured content.
  • Identifying trends, patterns, and relationships in the content.
  • Determining the characteristics of the content.
  • Identifying the intentions of individuals through the analysis of the content.
  • Identifying the implied aspects in the content.

Typically, when doing a content analysis, you’ll gather data not only from written text sources like newspapers, books, journals, and magazines but also from a variety of other oral and visual sources of content like:

  • Voice recordings, speeches, and interviews.
  • Web content, blogs, and social media content.
  • Films, videos, and photographs.

One of content analysis’s distinguishing features is that you'll be able to gather data for research without physically gathering data from participants. In other words, when doing a content analysis, you don't need to interact with people directly.

The process of doing a content analysis usually involves categorizing or coding concepts, words, and themes within the content and analyzing the results. We’ll look at the process in more detail below.

Why would you use a content analysis?

Typically, you’ll use content analysis when you want to:

  • Identify the intentions, communication trends, or communication patterns of an individual, a group of people, or even an institution.
  • Analyze and describe the behavioral and attitudinal responses of individuals to communications.
  • Determine the emotional or psychological state of an individual or a group of people.
  • Analyze the international differences in communication content.
  • Analyze audience responses to content.

Keep in mind, though, that these are just some examples of use cases where a content analysis might be appropriate and there are many others.

The key thing to remember is that content analysis will help you quantify the occurrence of specific words, phrases, themes, and concepts in content. Moreover, it can also be used when you want to make qualitative inferences out of the data by analyzing the semantic meanings and interrelationships between words, themes, and concepts.

Types of content analysis

In general, there are two types of content analysis: conceptual and relational analysis. Although these two types follow largely similar processes, their outcomes differ. As such, each of these types can provide different results, interpretations, and conclusions. With that in mind, let’s now look at these two types of content analysis in more detail.

Conceptual content analysis

With conceptual analysis, you’ll determine the existence of certain concepts within the content and identify their frequency. In other words, conceptual analysis involves counting the number of times a specific concept appears in the content.

Conceptual analysis is typically focused on explicit data, which means you’ll focus your analysis on a specific concept to identify its presence in the content and determine its frequency.

However, when conducting a content analysis, you can also use implicit data. This approach is more involved, complicated, and requires the use of a dictionary, contextual translation rules, or a combination of both.

No matter what type you use, conceptual analysis brings an element of quantitative analysis into a qualitative approach to research.

Relational content analysis

Relational content analysis takes conceptual analysis a step further. So, while the process starts in the same way by identifying concepts in content, it doesn’t focus on finding the frequency of these concepts, but rather on the relationships between the concepts, the context in which they appear in the content, and their interrelationships.

Before starting with a relational analysis, you’ll first need to decide on which subcategory of relational analysis you’ll use:

  • Affect extraction: With this relational content analysis approach, you’ll evaluate concepts based on their emotional attributes. You’ll typically assess these emotions on a rating scale with higher values assigned to positive emotions and lower values to negative ones. In turn, this allows you to capture the emotions of the writer or speaker at the time the content is created. The main difficulty with this approach is that emotions can differ over time and across populations.
  • Proximity analysis: With this approach, you’ll identify concepts as in conceptual analysis, but you’ll evaluate the way in which they occur together in the content. In other words, proximity analysis allows you to analyze the relationship between concepts and derive a concept matrix from which you’ll be able to develop meaning. Proximity analysis is typically used when you want to extract facts from the content rather than contextual, emotional, or cultural factors.
  • Cognitive mapping: Finally, cognitive mapping can be used with affect extraction or proximity analysis. It’s a visualization technique that allows you to create a model that represents the overall meaning of content and presents it as a graphic map of the relationships between concepts. As such, it’s also commonly used when analyzing the changes in meanings, definitions, and terms over time.
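
As a minimal illustration of the affect-extraction idea described above, the sketch below scores a text against a hypothetical emotion lexicon in which positive concepts receive higher values and negative ones lower values. A real study would use a validated lexicon and would need to handle negation as well as the time and population differences noted above.

```python
import re

# Hypothetical emotion lexicon on a -2..+2 rating scale.
affect_lexicon = {
    "delighted": 2, "happy": 1, "fine": 0, "annoyed": -1, "furious": -2,
}

def affect_score(text, lexicon):
    """Average the rating of every lexicon word found in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    ratings = [lexicon[t] for t in tokens if t in lexicon]
    return sum(ratings) / len(ratings) if ratings else 0.0

print(affect_score("I was happy with delivery but furious about billing.", affect_lexicon))
# (1 + -2) / 2 = -0.5 on the hypothetical scale
```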

Now that we’ve seen what content analysis is and looked at the different types of content analysis, it’s important to understand how reliable it is as a research method. We’ll also look at what criteria impact the validity of a content analysis.

There are three criteria that determine the reliability of a content analysis:

  • Stability. Stability refers to the tendency of coders to consistently categorize or code the same data in the same way over time.
  • Reproducibility. This criterion refers to the tendency of a group of coders to classify category membership in the same way.
  • Accuracy. Accuracy refers to the extent to which the classification of content corresponds to a specific standard.

Keep in mind, though, that because you’ll need to manually code or categorize the concepts you aim to identify and analyze, you’ll never be able to eliminate human error entirely. However, you will be able to minimize it.

In turn, three criteria determine the validity of a content analysis:

  • Closeness of categories. This is achieved by using multiple classifiers to arrive at an agreed-upon definition for a specific category by using either implicit variables or synonyms. In this way, the category can be broadened to include more relevant data.
  • Conclusions. Here, it’s crucial to decide what level of implication will be allowable. In other words, it’s important to consider whether the conclusions are valid based on the data or whether they can be explained by some other phenomena.
  • Generalizability of the results of the analysis to a theory. Generalizability comes down to how you determine your categories as mentioned above and how reliable those categories are. In turn, this relies on how accurately the categories measure the concepts or ideas that you’re looking to measure.

Considering everything mentioned above, there are definite advantages and disadvantages when it comes to content analysis.

Let’s now look at the steps you’ll need to follow when doing a content analysis.

The first step will always be to formulate your research questions. This is simply because, without clear and defined research questions, you won’t know what question to answer and, by implication, won’t be able to code your concepts.

Based on your research questions, you’ll then need to decide what content you’ll analyze. Here, you’ll use three factors to find the right content:

  • The type of content. Here you’ll need to consider the various types of content you’ll use and their medium, for example blog posts, social media posts, newspapers, or online articles.
  • What criteria you’ll use for inclusion. Here you’ll decide what criteria you’ll use to include content. This can, for instance, be the mention of a certain event or the advertising of a specific product.
  • Your parameters. Here, you’ll decide what content you’ll include based on specified parameters in terms of date and location.

The next step is to consider your own preconceptions of the questions and identify your biases. This process is referred to as bracketing, and it allows you to be aware of your biases before you start your research so that they’ll be less likely to influence the analysis.

Your next step would be to define the units of meaning that you’ll code. This will, for example, be the number of times a concept appears in the content or the treatment of concepts, words, or themes in the content. You’ll then need to define the set of categories you’ll use for coding, which can be either objective or more conceptual.

Based on the above, you’ll then organize the units of meaning into your defined categories. Apart from this, your coding scheme will also determine how you’ll analyze the data.
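As a simple, hypothetical illustration of this organising step, the following sketch groups coded units of meaning under their assigned categories so that each category can later be counted or examined further. The excerpts and category names are invented for the example.

```python
from collections import defaultdict

# Hypothetical coded units of meaning: (excerpt from the content, assigned category).
coded_units = [
    ("prices keep going up", "economic concern"),
    ("I worry about my rent", "economic concern"),
    ("the park near us is lovely", "neighbourhood satisfaction"),
    ("groceries cost too much", "economic concern"),
]

# Organise the units of meaning into the defined categories.
categories = defaultdict(list)
for excerpt, category in coded_units:
    categories[category].append(excerpt)

# Each category can now be counted or examined further.
for category, excerpts in categories.items():
    print(f"{category}: {len(excerpts)} unit(s) of meaning")
    for excerpt in excerpts:
        print(f"  - {excerpt}")
```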

The next step is to code the content. During this process, you’ll work through the content and record the data according to your coding scheme. It’s also here where conceptual and relational analysis starts to deviate in relation to the process you’ll need to follow.

As mentioned earlier, conceptual analysis aims to identify the number of times a specific concept, idea, word, or phrase appears in the content. So, here, you’ll need to decide what level of analysis you’ll implement.

In contrast, with relational analysis, you’ll need to decide what type of relational analysis you’ll use. So, you’ll need to determine whether you’ll use affect extraction, proximity analysis, cognitive mapping, or a combination of these approaches.

Once you’ve coded the data, you’ll be able to analyze it and draw conclusions from the data based on your research questions.

Content analysis offers an inexpensive and flexible way to identify trends and patterns in communication content. In addition, it’s unobtrusive, which eliminates many ethical concerns and inaccuracies in research data. However, to be most effective, a content analysis must be planned and used carefully in order to ensure reliability and validity.

The two general types of content analysis are conceptual and relational analysis. Although these two types follow largely similar processes, their outcomes differ. As such, each of these types can provide different results, interpretations, and conclusions.

In qualitative research, coding means categorizing concepts, words, and themes within your content to create a basis for analyzing the results. While coding, you work through the content and record the data according to your coding scheme.

Content analysis is the process of analyzing content and its features with the aim of identifying patterns and the presence of words, themes, and concepts within the content. The goal of a content analysis is to present the trends, patterns, concepts, and ideas in content as objective, quantitative or qualitative data, depending on the specific use case.

Content analysis is a qualitative method of data analysis and can be used in many different fields. It is particularly popular in the social sciences.

It is possible to do qualitative analysis without coding, but content analysis as a method of qualitative analysis requires coding or categorizing data to then analyze it according to your coding scheme in the next step.


What Is Qualitative Content Analysis?

QCA explained simply (with examples).

By: Jenna Crosley (PhD). Reviewed by: Dr Eunice Rautenbach (DTech) | February 2021

If you’re in the process of preparing for your dissertation, thesis or research project, you’ve probably encountered the term “ qualitative content analysis ” – it’s quite a mouthful. If you’ve landed on this post, you’re probably a bit confused about it. Well, the good news is that you’ve come to the right place…

Overview: Qualitative Content Analysis

  • What (exactly) is qualitative content analysis
  • The two main types of content analysis
  • When to use content analysis
  • How to conduct content analysis (the process)
  • The advantages and disadvantages of content analysis

1. What is content analysis?

Content analysis is a  qualitative analysis method  that focuses on recorded human artefacts such as manuscripts, voice recordings and journals. Content analysis investigates these written, spoken and visual artefacts without explicitly extracting data from participants – this is called  unobtrusive  research.

In other words, with content analysis, you don’t necessarily need to interact with participants (although you can if necessary); you can simply analyse the data that they have already produced. With this type of analysis, you can analyse data such as text messages, books, Facebook posts, videos, and audio (just to mention a few).

The basics – explicit and implicit content

When working with content analysis, explicit and implicit content will play a role. Explicit data is transparent and easy to identify, while implicit data is that which requires some form of interpretation and is often of a subjective nature. Sounds a bit fluffy? Here’s an example:

Joe: Hi there, what can I help you with? 

Lauren: I recently adopted a puppy and I’m worried that I’m not feeding him the right food. Could you please advise me on what I should be feeding? 

Joe: Sure, just follow me and I’ll show you. Do you have any other pets?

Lauren: Only one, and it tweets a lot!

In this exchange, the explicit data indicates that Joe is helping Lauren to find the right puppy food. Joe asks Lauren whether she has any pets aside from her puppy. This data is explicit because it requires no interpretation.

On the other hand, implicit data , in this case, includes the fact that the speakers are in a pet store. This information is not clearly stated but can be inferred from the conversation, where Joe is helping Lauren to choose pet food. An additional piece of implicit data is that Lauren likely has some type of bird as a pet. This can be inferred from the way that Lauren states that her pet “tweets”.

As you can see, explicit and implicit data both play a role in human interaction and are an important part of your analysis. However, it’s important to differentiate between these two types of data when you’re undertaking content analysis. Interpreting implicit data can be rather subjective as conclusions are based on the researcher’s interpretation. This can introduce an element of bias, which risks skewing your results.


2. The two types of content analysis

Now that you understand the difference between implicit and explicit data, let’s move on to the two general types of content analysis: conceptual and relational content analysis. Importantly, while conceptual and relational content analysis both follow similar steps initially, the aims and outcomes of each are different.

Conceptual analysis focuses on the number of times a concept occurs in a set of data and is generally focused on explicit data. For example, if you were to have the following conversation:

Marie: She told me that she has three cats.

Jean: What are her cats’ names?

Marie: I think the first one is Bella, the second one is Mia, and… I can’t remember the third cat’s name.

In this data, you can see that the word “cat” has been used three times. Through conceptual content analysis, you can deduce that cats are the central topic of the conversation. You can also perform a frequency analysis, where you assess the term’s frequency in the data. For example, in the exchange above, the word “cat” makes up 9% of the data. In other words, conceptual analysis brings a little bit of quantitative analysis into your qualitative analysis.
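As a rough sketch of this kind of frequency calculation, the snippet below counts how many words in the exchange above refer to cats and expresses that as a share of all words. The exact percentage depends on how you tokenise the text, so treat the figure as approximate.

```python
import re

# The exchange above, reproduced without the speaker labels.
dialogue = (
    "She told me that she has three cats. "
    "What are her cats' names? "
    "I think the first one is Bella, the second one is Mia, "
    "and I can't remember the third cat's name."
)

# Tokenise into words, keeping apostrophes so "cat's" stays a single token.
words = re.findall(r"[a-z']+", dialogue.lower())

# Count every word form that starts with "cat" (cats, cats', cat's).
cat_mentions = [w for w in words if w.startswith("cat")]

share = len(cat_mentions) / len(words) * 100
print(f"{len(cat_mentions)} of {len(words)} words refer to cats ({share:.0f}%)")
```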

As you can see, the above data is without interpretation and focuses on explicit data. Relational content analysis, on the other hand, takes a more holistic view by focusing more on implicit data in terms of context, surrounding words and relationships.

There are three types of relational analysis:

  • Affect extraction
  • Proximity analysis
  • Cognitive mapping

Affect extraction is when you assess concepts according to emotional attributes. These emotions are typically mapped on scales, such as a Likert scale or a rating scale ranging from 1 to 5, where 1 is “very sad” and 5 is “very happy”.

If participants are talking about their achievements, they are likely to be given a score of 4 or 5, depending on how good they feel about it. If a participant is describing a traumatic event, they are likely to have a much lower score, either 1 or 2.
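A minimal sketch of affect extraction could look like the following, where each excerpt has already been coded on the 1-to-5 scale described above. The excerpts and their scores are invented purely for illustration.

```python
# Hypothetical excerpts that have already been coded on a 1 (very sad)
# to 5 (very happy) affect scale, as described above.
coded_excerpts = [
    ("I finally finished my degree and I'm thrilled", 5),
    ("Work has been fine, nothing special", 3),
    ("Losing my job last year was devastating", 1),
]

# Summarise the emotional attributes across the coded excerpts.
scores = [score for _, score in coded_excerpts]
average = sum(scores) / len(scores)

print(f"Average affect score across {len(scores)} excerpts: {average:.1f}")
for excerpt, score in coded_excerpts:
    print(f"  [{score}] {excerpt}")
```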

Proximity analysis identifies explicit terms (such as those found in a conceptual analysis) and the patterns in terms of how they co-occur in a text. In other words, proximity analysis investigates the relationship between terms and aims to group these to extract themes and develop meaning.

Proximity analysis is typically utilised when you’re looking for hard facts rather than emotional, cultural, or contextual factors. For example, if you were to analyse a political speech, you may want to focus only on what has been said, rather than implications or hidden meanings. To do this, you would make use of explicit data, discounting any underlying meanings and implications of the speech.

Lastly, there’s cognitive mapping, which can be used alongside affect extraction or proximity analysis. Cognitive mapping involves taking different texts and comparing them in a visual format – i.e. a cognitive map. Typically, you’d use cognitive mapping in studies that assess changes in terms, definitions, and meanings over time. It can also serve as a way to visualise affect extraction or proximity analysis and is often presented in a form such as a graphic map.

Example of a cognitive map

To recap on the essentials, content analysis is a qualitative analysis method that focuses on recorded human artefacts. It involves both conceptual analysis (which is more numbers-based) and relational analysis (which focuses on the relationships between concepts and how they’re connected).


3. When should you use content analysis?

Content analysis is a useful tool that provides insight into trends of communication. For example, you could use a discussion forum as the basis of your analysis and look at the types of things the members talk about as well as how they use language to express themselves. Content analysis is flexible in that it can be applied at the individual, group, and institutional levels.

Content analysis is typically used in studies where the aim is to better understand factors such as behaviours, attitudes, values, emotions, and opinions. For example, you could use content analysis to investigate an issue in society, such as miscommunication between cultures. In this example, you could compare patterns of communication in participants from different cultures, which will allow you to create strategies for avoiding misunderstandings in intercultural interactions.

Another example could include conducting content analysis on a publication such as a book. Here you could gather data on the themes, topics, language use and opinions reflected in the text to draw conclusions regarding the political (such as conservative or liberal) leanings of the publication.


4. How to conduct a qualitative content analysis

Conceptual and relational content analysis differ in terms of their exact process; however, there are some similarities. Let’s have a look at these first – i.e., the generic process:

  • Recap on your research questions
  • Undertake bracketing to identify biases
  • Operationalise your variables and develop a coding scheme
  • Code the data and undertake your analysis

Step 1 – Recap on your research questions

It’s always useful to begin a project with research questions, or at least with an idea of what you are looking for. In fact, if you’ve spent time reading this blog, you’ll know that it’s useful to recap on your research questions, aims and objectives when undertaking pretty much any research activity. In the context of content analysis, it’s difficult to know what needs to be coded and what doesn’t without a clear view of the research questions.

For example, if you were to code a conversation focused on basic issues of social justice, you may be met with a wide range of topics that may be irrelevant to your research. However, if you approach this data set with the specific intent of investigating opinions on gender issues, you will be able to focus on this topic alone, which would allow you to code only what you need to investigate.


Step 2 – Reflect on your personal perspectives and biases

It’s vital that you reflect on your own preconceptions of the topic at hand and identify the biases that you might drag into your content analysis – this is called “bracketing”. By identifying these upfront, you’ll be more aware of them and less likely to have them subconsciously influence your analysis.

For example, if you were to investigate how a community converses about unequal access to healthcare, it is important to assess your views to ensure that you don’t project these onto your understanding of the opinions put forth by the community. If you have access to medical aid, for instance, you should not allow this to interfere with your examination of unequal access.


Step 3 – Operationalise your variables and develop a coding scheme

Next, you need to operationalise your variables. But what does that mean? Simply put, it means that you have to define each variable or construct. Give every item a clear definition – what does it mean (include) and what does it not mean (exclude). For example, if you were to investigate children’s views on healthy foods, you would first need to define what age group/range you’re looking at, and then also define what you mean by “healthy foods”.

In combination with the above, it is important to create a coding scheme, which will consist of information about your variables (how you defined each variable), as well as a process for analysing the data. For this, you would refer back to how you operationalised/defined your variables so that you know how to code your data.

For example, when coding, when should you code a food as “healthy”? What makes a food choice healthy? Is it the absence of sugar or saturated fat? Is it the presence of fibre and protein? It’s very important to have clearly defined variables to achieve consistent coding – without this, your analysis will get very muddy, very quickly.
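One lightweight way to keep such definitions explicit is to write the coding scheme down as data before coding begins. The sketch below, with invented inclusion and exclusion terms, shows a hypothetical “healthy food” code defined by what it includes and excludes, plus a small helper that applies the definition to a piece of text.

```python
# A hypothetical coding scheme: each code lists what it includes and excludes.
coding_scheme = {
    "healthy food": {
        "include": ["broccoli", "peaches", "bananas", "salad", "fruit"],
        "exclude": ["sweets", "crisps", "soda"],
    },
}

def apply_code(text, code, scheme):
    """Return the include and exclude terms from the scheme found in the text."""
    lowered = text.lower()
    definition = scheme[code]
    included = [term for term in definition["include"] if term in lowered]
    excluded = [term for term in definition["exclude"] if term in lowered]
    return included, excluded

included, excluded = apply_code(
    "I like bananas and crisps for lunch", "healthy food", coding_scheme
)
print("Terms matching the definition:", included)   # ['bananas']
print("Terms explicitly excluded:", excluded)       # ['crisps']
```

Writing the scheme down in this way also makes it easier to share with co-coders and to keep coding consistent across the whole dataset.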


Step 4 – Code and analyse the data

The next step is to code the data. At this stage, there are some differences between conceptual and relational analysis.

As described earlier in this post, conceptual analysis looks at the existence and frequency of concepts, whereas a relational analysis looks at the relationships between concepts. For both types of analyses, it is important to pre-select a concept that you wish to assess in your data. Using the example of studying children’s views on healthy food, you could pre-select the concept of “healthy food” and assess the number of times the concept pops up in your data.

Here is where conceptual and relational analysis start to differ.

At this stage of conceptual analysis , it is necessary to decide on the level of analysis you’ll perform on your data, and whether this will exist on the word, phrase, sentence, or thematic level. For example, will you code the phrase “healthy food” on its own? Will you code each term relating to healthy food (e.g., broccoli, peaches, bananas, etc.) with the code “healthy food” or will these be coded individually? It is very important to establish this from the get-go to avoid inconsistencies that could result in you having to code your data all over again.

On the other hand, relational analysis requires you to decide on the type of analysis you’ll use. So, will you use affect extraction? Proximity analysis? Cognitive mapping? A mix? It’s vital to determine the type of analysis before you begin to code your data so that you can maintain the reliability and validity of your research.


How to conduct conceptual analysis

First, let’s have a look at the process for conceptual analysis.

Once you’ve decided on your level of analysis, you need to establish how you will code your concepts, and how many of these you want to code. Here you can choose whether you want to code in a deductive or inductive manner. Just to recap, deductive coding is when you begin the coding process with a set of pre-determined codes, whereas inductive coding entails the codes emerging as you progress with the coding process. Here it is also important to decide what should be included and excluded from your analysis, and also what levels of implication you wish to include in your codes.

For example, if you have the concept of “tall”, can you include “up in the clouds”, derived from the sentence, “the giraffe’s head is up in the clouds” in the code, or should it be a separate code? In addition to this, you need to know what levels of words may be included in your codes or not. For example, if you say, “the panda is cute” and “look at the panda’s cuteness”, can “cute” and “cuteness” be included under the same code?
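One way to handle the “cute” versus “cuteness” question consistently is to normalise word forms before coding so that related forms map to the same code. The sketch below uses a small hand-written mapping rather than a full stemmer; the mapping and example sentences are assumptions made for illustration.

```python
# Hypothetical mapping from surface word forms to the code they belong to,
# so that "cute" and "cuteness" are grouped under the same code.
form_to_code = {
    "cute": "cuteness",
    "cuteness": "cuteness",
    "adorable": "cuteness",
    "tall": "height",
    "height": "height",
}

sentences = ["the panda is cute", "look at the panda's cuteness"]

# Each sentence is reduced to the set of codes it supports.
for sentence in sentences:
    codes = {form_to_code[word] for word in sentence.split() if word in form_to_code}
    print(f"{sentence!r} -> codes: {sorted(codes)}")
```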

Once you’ve considered the above, it’s time to code the text. We’ve already published a detailed post about coding, so we won’t go into that process here. Once you’re done coding, you can move on to analysing your results. This is where you will aim to find generalisations in your data, and thus draw your conclusions.

How to conduct relational analysis

Now let’s return to relational analysis.

As mentioned, you want to look at the relationships between concepts. To do this, you’ll need to create categories by reducing your data (in other words, grouping similar concepts together) and then also code for words and/or patterns. These are both done with the aim of discovering whether these words exist, and if they do, what they mean.

Your next step is to assess your data and to code the relationships between your terms and meanings, so that you can move on to your final step, which is to sum up and analyse the data.

To recap, it’s important to start your analysis process by reviewing your research questions and identifying your biases. From there, you need to operationalise your variables, code your data and then analyse it.


5. What are the pros & cons of content analysis?

One of the main advantages of content analysis is that it allows you to use a mix of quantitative and qualitative research methods, which results in a more scientifically rigorous analysis.

For example, with conceptual analysis, you can count the number of times that a term or a code appears in a dataset, which can be assessed from a quantitative standpoint. In addition to this, you can then use a qualitative approach to investigate the underlying meanings of these terms or codes and the relationships between them.

Content analysis is also unobtrusive and therefore poses fewer ethical issues than some other analysis methods. As the content you’ll analyse oftentimes already exists, you’ll analyse what has been produced previously, and so you won’t have to collect data directly from participants. When coded correctly, data is analysed in a very systematic and transparent manner, which means that issues of replicability (how possible it is to recreate research under the same conditions) are reduced greatly.

On the downside , qualitative research (in general, not just content analysis) is often critiqued for being too subjective and for not being scientifically rigorous enough. This is where reliability (how replicable a study is by other researchers) and validity (how suitable the research design is for the topic being investigated) come into play – if you take these into account, you’ll be on your way to achieving sound research results.


Recap: Qualitative content analysis

In this post, we’ve covered a lot of ground: what qualitative content analysis is, the two main types (conceptual and relational), when to use it, how to conduct it, and its main advantages and disadvantages.



Content Analysis


1 Introduction

  • Published: November 2015

This chapter offers an inclusive definition of content analysis. This helps in clarifying some key terms and concepts. Three approaches to content analysis are introduced and defined briefly: basic content analysis, interpretive content analysis, and qualitative content analysis. Long-standing differences between quantitative and qualitative approaches to content analysis that are still evident in contemporary published research are also touched on here. In addition, the chapter examines the origins, evolution, and conceptual foundations of content analysis, as well as the development of content analysis in the social work profession. Finally, the chapter offers illustrative examples of different approaches to content analysis to ground the discussion in examples of published research.


  • Journal List
  • Am J Pharm Educ
  • v.84(1); 2020 Jan

Demystifying Content Analysis

A.J. Kleinheksel (a), Nicole Rockich-Winston, Huda Tawfik (b), Tasha R. Wyatt

a. The Medical College of Georgia at Augusta University, Augusta, Georgia
b. Central Michigan University, College of Medicine, Mt. Pleasant, Michigan

Objective. In the course of daily teaching responsibilities, pharmacy educators collect rich data that can provide valuable insight into student learning. This article describes the qualitative data analysis method of content analysis, which can be useful to pharmacy educators because of its application in the investigation of a wide variety of data sources, including textual, visual, and audio files.

Findings. Both manifest and latent content analysis approaches are described, with several examples used to illustrate the processes. This article also offers insights into the variety of relevant terms and visualizations found in the content analysis literature. Finally, common threats to the reliability and validity of content analysis are discussed, along with suitable strategies to mitigate these risks during analysis.

Summary. This review of content analysis as a qualitative data analysis method will provide clarity and actionable instruction for both novice and experienced pharmacy education researchers.

INTRODUCTION

The Academy’s growing interest in qualitative research indicates an important shift in the field’s scientific paradigm. Whereas health science researchers have historically looked to quantitative methods to answer their questions, this shift signals that a purely positivist, objective approach is no longer sufficient to answer pharmacy education’s research questions. Educators who want to study their teaching and students’ learning will find content analysis an easily accessible, robust method of qualitative data analysis that can yield rigorous results for both publication and the improvement of their educational practice. Content analysis is a method designed to identify and interpret meaning in recorded forms of communication by isolating small pieces of the data that represent salient concepts and then applying or creating a framework to organize the pieces in a way that can be used to describe or explain a phenomenon. 1 Content analysis is particularly useful in situations where there is a large amount of unanalyzed textual data, such as those many pharmacy educators have already collected as part of their teaching practice. Because of its accessibility, content analysis is also an appropriate qualitative method for pharmacy educators with limited experience in educational research. This article will introduce and illustrate the process of content analysis as a way to analyze existing data, but also as an approach that may lead pharmacy educators to ask new types of research questions.

Content analysis is a well-established data analysis method that has evolved in its treatment of textual data. Content analysis was originally introduced as a strictly quantitative method, recording counts to measure the observed frequency of pre-identified targets in consumer research. 1 However, as the naturalistic qualitative paradigm became more prevalent in social sciences research and researchers became increasingly interested in the way people behave in natural settings, the process of content analysis was adapted into a more interesting and meaningful approach. Content analysis has the potential to be a useful method in pharmacy education because it can help educational researchers develop a deeper understanding of a particular phenomenon by providing structure in a large amount of textual data through a systematic process of interpretation. It also offers potential value because it can help identify problematic areas in student understanding and guide the process of targeted teaching. Several research studies in pharmacy education have used the method of content analysis. 2-7 Two studies in particular offer noteworthy examples: Wallman and colleagues employed manifest content analysis to analyze semi-structured interviews in order to explore what students learn during experiential rotations, 7 while Moser and colleagues adopted latent content analysis to evaluate open-ended survey responses on student perceptions of learning communities. 6 To elaborate on these approaches further, we will describe the two types of qualitative content analysis, manifest and latent, and demonstrate the corresponding analytical processes using examples that illustrate their benefit.

Qualitative Content Analysis

Content analysis rests on the assumption that texts are a rich data source with great potential to reveal valuable information about particular phenomena. 8 It is the process of considering both the participant and context when sorting text into groups of related categories to identify similarities and differences, patterns, and associations, both on the surface and implied within. 9-11 The method is considered high-yield in educational research because it is versatile and can be applied in both qualitative and quantitative studies. 12 While it is important to note that content analysis has application in visual and auditory artifacts (eg, an image or song), for our purposes we will largely focus on the most common application, which is the analysis of textual or transcribed content (eg, open-ended survey responses, print media, interviews, recorded observations, etc). The terminology of content analysis can vary throughout quantitative and qualitative literature, which may lead to some confusion among both novice and experienced researchers. However, there are also several agreed-upon terms and phrases that span the literature, as found in Table 1.

Table 1. Terms and Definitions Used in Qualitative Content Analysis

There is more often disagreement on terminology in the methodological approaches to content analysis, though the most common differentiation is between the two types of content: manifest and latent. In much of the literature, manifest content analysis is defined as describing what is occurring on the surface, what is real and literally present, and as “staying close to the text.” 8,13 Manifest content analysis is concerned with data that are easily observable both to researchers and the coders who assist in their analyses, without the need to discern intent or identify deeper meaning. It is content that can be recognized and counted with little training. Early applications of manifest analysis focused on identifying easily observable targets within text (eg, the number of instances a certain word appears in newspaper articles), film (eg, the occupation of a character), or interpersonal interactions (eg, tracking the number of times a participant blinks during an interview). 14 This application, in which frequency counts are used to understand a phenomenon, reflects a surface-level analysis and assumes there is objective truth in the data that can be revealed with very little interpretation. The number of times a target (ie, code) appears within the text is used as a way to understand its prevalence. Quantitative content analysis is always a positivist manifest content analysis, in that the nature of truth is believed to be objective, observable, and measurable. Qualitative research, which favors the researcher’s interpretation of an individual’s experience, may also be used to analyze manifest content. However, the intent of the application is to describe a dynamic reality that cannot be separated from the lived experiences of the researcher. Although qualitative content analysis can be conducted whether knowledge is thought to be innate, acquired, or socially constructed, the purpose of qualitative manifest content analysis is to transcend simple word counts and delve into a deeper examination of the language in order to organize large amounts of text into categories that reflect a shared meaning. 15,16 The practical distinction between quantitative and qualitative manifest content analysis is the intention behind the analysis. The quantitative method seeks to generate a numerical value to either cite prevalence or use in statistical analyses, while the qualitative method seeks to identify a construct or concept within the text using specific words or phrases for substantiation, or to provide a more organized structure to the text being described.

Latent content analysis is most often defined as interpreting what is hidden deep within the text. In this method, the role of the researcher is to discover the implied meaning in participants’ experiences. 8,13 For example, in a transcribed exchange in an office setting, a participant might say to a coworker, “Yeah, here we are…another Monday. So exciting!” The researcher would apply context in order to discover the emotion being conveyed (ie, the implied meaning). In this example, the comment could be interpreted as genuine, it could be interpreted as a sarcastic comment made in an attempt at humor in order to develop or sustain social bonds with the coworker, or the context might imply that the sarcasm was meant to convey displeasure and end the interaction.

Latent content analysis acknowledges that the researcher is intimately involved in the analytical process and that their role is to actively use mental schema, theories, and lenses to interpret and understand the data. 10 Whereas manifest analyses are typically conducted in a way that the researcher is thought to maintain distance and separation from the objects of study, latent analyses underscore the importance of the researcher co-creating meaning with the text. 17 Adding nuance to this type of content, Potter and Levine-Donnerstein argue that within latent content analysis, there are two distinct types: latent pattern and latent projective. 14 Latent pattern content analysis seeks to establish a pattern of characteristics in the text itself, while latent projective content analysis leverages the researcher’s own interpretations of the meaning of the text. While both approaches rely on codes that emerge from the content using the coder’s own perspectives and mental schema, the distinction between these two types of analyses is in their foci. 14 Though we do not agree, some researchers believe that all qualitative content analysis is latent content analysis. 11 These disagreements typically occur where there are differences in intent and where there are areas of overlap in the results. For example, both qualitative manifest and latent pattern content analyses may identify patterns as a result of their application, though the researcher would have approached the content with different methodological intents: a manifest approach seeks only to describe what is observed, while a latent pattern approach seeks to discover an unseen pattern. At this point, these distinctions may seem too philosophical to serve a practical purpose, so we will attempt to clarify these concepts by presenting three types of analyses for illustrative purposes, beginning with a description of how codes are created and used.

Creating and Using Codes

Codes are the currency of content analysis. Researchers use codes to organize and understand their data. Through the coding process, pharmacy educators can systematically and rigorously categorize and interpret vast amounts of text for use in their educational practice or in publication. Codes themselves are short, descriptive labels that symbolically assign a summative or salient attribute to more than one unit of meaning identified in the text. 18 To create codes, a researcher must first become immersed in the data, which typically occurs when a researcher transcribes recorded data or conducts several readings of the text. This process allows the researcher to become familiar with the scope of the data, which spurs nascent ideas about potential concepts or constructs that may exist within it. If studying a phenomenon that has already been described through an existing framework, codes can be created a priori using theoretical frameworks or concepts identified in the literature. If there is no existing framework to apply, codes can emerge during the analytical process. However, emergent codes can also be created as addenda to a priori codes that were identified before the analysis begins if the a priori codes do not sufficiently capture the researcher’s area of interest.

The process of detecting emergent codes begins with identification of units of meaning. While there is no one way to decide what qualifies as a meaning unit, researchers typically define units of meaning differently depending on what kind of analysis is being conducted. As a general rule, when dialogue is being analyzed, such as interviews or focus groups, meaning units are identified as conversational turns, though a code can be as short as one or two words. In written text, such as student reflections or course evaluation data, the researcher must decide if the text should be divided into phrases or sentences, or remain as paragraphs. This decision is usually made based on how many different units of meaning are expressed in a block of text. For example, in a paragraph, if there are several thoughts or concepts being expressed, it is best to break up the paragraph into sentences. If one sentence contains multiple ideas of interest, making it difficult to separate one important thought or behavior from another, then the sentence can be divided into smaller units, such as phrases or sentence fragments. These phrases or sentence fragments are then coded as separate meaning units. Conversely, longer or more complex units of meaning should be condensed into shorter representations that still retain the original meaning in order to reduce the cognitive burden of the analytical process. This could entail removing verbal tics (eg, “well, uhm…”) from transcribed data or simplifying a compound sentence. Condensation does not ascribe interpretation or implied meaning to a unit, but only shortens a meaning unit as much as possible while preserving the original meaning identified. 18 After condensation, a researcher can proceed to the creation of codes.
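For instructors who want to prepare transcribed data programmatically, the division into meaning units and part of the condensation step can be scripted. The sketch below is a minimal Python illustration, assuming sentence-level meaning units and a small, hypothetical list of verbal tics; genuine condensation remains a judgment the researcher makes while reading.

```python
import re

# Hypothetical filler patterns (uh, uhm, um, er, erm); condensation proper is
# still done by the researcher, this only trims obvious verbal tics.
FILLERS = re.compile(r"\b(?:u+h+m*|u+m+|e+r+m*)\b[,.]?\s*", flags=re.IGNORECASE)

def to_meaning_units(text: str) -> list[str]:
    """Split a block of text into sentence-level meaning units."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def condense(unit: str) -> str:
    """Strip verbal tics and collapse whitespace while keeping the unit's meaning."""
    return re.sub(r"\s{2,}", " ", FILLERS.sub("", unit)).strip()

transcript = "Uhm, I think a beta-blocker is indicated here. Er, the family history matters too."
print([condense(u) for u in to_meaning_units(transcript)])
```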

Many researchers begin their analyses with several general codes in mind that help guide their focus as defined by their research question, even in instances where the researcher has no a priori model or theory. For example, if a group of instructors are interested in examining recorded videos of their lectures to identify moments of student engagement, they may begin with using generally agreed upon concepts of engagement as codes, such as students “raising their hands,” “taking notes,” and “speaking in class.” However, as the instructors continue to watch their videos, they may notice other behaviors which were not initially anticipated. Perhaps students were seen creating flow charts based on information presented in class. Alternatively, perhaps instructors wanted to include moments when students posed questions to their peers without being prompted. In this case, the instructors would allow the codes of “creating graphic organizers” and “questioning peers” to emerge as additional ways to identify the behavior of student engagement.

Once a researcher has identified condensed units of meaning and labeled them with codes, the codes are then sorted into categories which can help provide more structure to the data. In the above example of recorded lectures, perhaps the category of “verbal behaviors” could be used to group the codes of “speaking in class” and “questioning peers.” For complex analyses, subcategories can also be used to better organize a large amount of codes, but solely at the discretion of the researcher. Two or more categories of codes are then used to identify or support a broader underlying meaning which develops into themes. Themes are most often employed in latent analyses; however, they are appropriate in manifest analyses as well. Themes describe behaviors, experiences, or emotions that occur throughout several categories. 18 Figure 1 illustrates this process. Using the same videotaped lecture example, the instructors might identify two themes of student engagement, “active engagement” and “passive engagement,” where active engagement is supported by the category of “verbal behavior” and also a category that includes the code of “raising their hands” (perhaps something along the lines of “pursuing engagement”), and the theme of “passive engagement” is supported by a category used to organize the behaviors of “taking notes” and “creating graphic organizers.”

Figure 1. The Process of Qualitative Content Analysis
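Because the code-category-theme hierarchy in Figure 1 is essentially a tree, it can be convenient to record it as structured data once coding is complete. Below is a minimal Python sketch of the lecture-video example; the theme and code labels come from the text above, while the category names "pursuing engagement" and "documenting behaviors" are placeholders the article only gestures at.

```python
# Themes map to categories, and categories map to the codes that support them.
TAXONOMY = {
    "active engagement": {
        "verbal behaviors": ["speaking in class", "questioning peers"],
        "pursuing engagement": ["raising their hands"],  # hypothetical category name
    },
    "passive engagement": {
        "documenting behaviors": ["taking notes", "creating graphic organizers"],  # hypothetical
    },
}

def codes_for_theme(theme: str) -> list[str]:
    """Flatten all codes that support a given theme."""
    return [code for codes in TAXONOMY.get(theme, {}).values() for code in codes]

print(codes_for_theme("active engagement"))
```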

To more fully demonstrate the process of content analysis and the generation and use of codes, categories, and themes, we present and describe examples of both manifest and latent content analysis. Given that there are multiple ways to create and use codes, our examples illustrate both processes of creating and using a predetermined set of codes. Regardless of the kind of content analysis instructors want to conduct, the initial steps are the same. The instructor must analyze the data using codes as a sense-making process.

Manifest Content Analysis

The first form of analysis, manifest content analysis, examines text for elements that exist on the surface of the text, the meaning of which is taken at face value. Schools and colleges of pharmacy may benefit from conducting manifest content analyses at a programmatic level, including analysis of student evaluations to determine the value of certain courses, or analysis of recruitment materials for addressing issues of cultural humility in a uniform manner. Such uses for manifest content analysis may help administrators make more data-based decisions about students and courses. However, for our example of manifest content analysis, we illustrate the use of content analysis in informing instruction for a single pharmacy educator (Figure 2).

Figure 2. A Student’s Completed Beta-blocker Case with Codes in Underlined Bold Text

In the example, a pharmacology instructor is trying to assess students’ understanding of three concepts related to the beta-blocker class of drugs: indication of the drug, relevance of family history, and contraindications and precautions. To do so, the instructor asks the students to write a patient case in which beta-blockers are indicated. The instructor gives the students the following prompt: “Reverse-engineer a case in which beta-blockers would be prescribed to the patient. Include a history of the present illness, the patient’s medical, family, and social history, medications, allergies, and relevant lab tests.” Figure 2 is a hypothetical student’s completed assignment, in which they demonstrate their understanding of when and why a beta-blocker would be prescribed.

The student-generated cases are then treated as data and analyzed for the presence of the three previously identified indicators of understanding in order to help the instructor make decisions about where and how to focus future teaching efforts related to this drug class. Codes are created a priori out of the instructor’s interest in analyzing students’ understanding of the concepts related to beta-blocker prescriptions. A codebook (Table 2) is created with the following columns: name of code, code description, and examples of the code. This codebook helps an individual researcher to approach their analysis systematically, but it can also facilitate coding by multiple coders who would apply the same rules outlined in the codebook to the coding process.

Table 2. Example Code Book Created for Manifest Content Analysis
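A codebook like Table 2 can also be expressed as data so that a first, rough pass over the student cases is reproducible. The sketch below is a minimal Python illustration; the code names mirror the three concepts named above, but the descriptions and keyword examples are hypothetical stand-ins for the instructor's actual coding rules, and keyword matching is no substitute for reading each case.

```python
# Each codebook entry records the code name, a description, and example indicators.
CODEBOOK = [
    {
        "code": "indication",
        "description": "Student identifies a condition for which a beta-blocker is indicated",
        "examples": ["hypertension", "angina", "atrial fibrillation"],  # hypothetical keywords
    },
    {
        "code": "family_history",
        "description": "Student includes relevant family history",
        "examples": ["father had a heart attack", "family history of hypertension"],
    },
    {
        "code": "contraindications",
        "description": "Student notes contraindications or precautions",
        "examples": ["asthma", "bradycardia", "hypotension"],
    },
]

def apply_codes(case_text: str) -> dict[str, list[str]]:
    """Return, for each code, the example keywords found in the case text."""
    text = case_text.lower()
    return {
        entry["code"]: [kw for kw in entry["examples"] if kw.lower() in text]
        for entry in CODEBOOK
    }

case = "58-year-old male with hypertension and angina; father had a heart attack at 60. History of asthma."
print(apply_codes(case))
```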

Using multiple coders introduces complexity to the analysis process, but it is oftentimes the only practical way to analyze large amounts of data. To ensure that all coders are working in tandem, they must establish inter-rater reliability as part of their training process. This process requires that a single form of text be selected, such as one student evaluation. After reviewing the codebook and receiving instruction, everyone on the team individually codes the same piece of data. While calculating percentage agreement has sometimes been used to establish inter-rater reliability, most publication editors require more rigorous statistical analysis (eg, Krippendorff’s alpha or Cohen’s kappa). 19 Detailed descriptions of these statistics fall outside the scope of this introduction, but it is important to note that the choice depends on the number of coders, the sample size, and the type of data to be analyzed.
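For teams that want to move beyond percentage agreement, a minimal sketch of computing Cohen's kappa for two coders is shown below, assuming scikit-learn is available and that each coder assigned exactly one code per meaning unit; the labels are hypothetical. Krippendorff's alpha, which accommodates more than two coders and missing codes, typically requires a dedicated package.

```python
# Minimal inter-rater agreement check for two coders over the same meaning units.
from sklearn.metrics import cohen_kappa_score

coder_a = ["indication", "family_history", "none", "contraindications", "indication"]
coder_b = ["indication", "none", "none", "contraindications", "indication"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1 indicate strong agreement beyond chance
```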

Latent Content Analysis

Latent content analysis is another option for pharmacy educators, especially when there are theoretical frameworks or lenses the educator proposes to apply. Such frameworks describe and provide structure to complex concepts and may often be derived from relevant theories. Latent content analysis requires that the researcher is intimately involved in interpreting and finding meaning in the text because meaning is not readily apparent on the surface. 10 To illustrate a latent content analysis using a combination of a priori and emergent codes, we will use the example of a transcribed video excerpt from a student pharmacist interaction with a standardized patient. In this example, the goal is for first-year students to practice talking to a customer about an over-the-counter medication. The case is designed to simulate a customer at a pharmacy counter, who is seeking advice on a medication. The learning objectives for the pharmacist in-training are to assess the customer’s symptoms, determine if the customer can self-treat or if they need to seek out their primary care physician, and then recommend a medication to alleviate the patient’s symptoms.

To begin, pharmacy educators conducting educational research should first identify what they are looking for in the video transcript. In this case, because the primary outcome for this exercise is aimed at assessing the “soft skills” of student pharmacists, codes are created using the counseling rubric created by Horton and colleagues. 20 Four a priori codes are developed using the literature: empathy, patient-friendly terms, politeness, and positive attitude. However, because the original four codes are inadequate to capture all areas representing the skills the instructor is looking for during the process of analysis, four additional codes are also created: active listening, confidence, follow-up, and patient at ease. Figure 3 presents the video transcript with each of the codes assigned to the meaning units in bolded parentheses.

Figure 3. A Transcript of a Student’s (JR) Experience with a Standardized Patient (SP) in Which the Codes are Bolded in Parentheses

Following the initial coding using these eight codes, the codes are consolidated to create categories, which are depicted in the taxonomy in Figure 4. Categories are relationships between codes that represent a higher level of abstraction in the data. 18 To reach conclusions and interpret the fundamental underlying meaning in the data, categories are then organized into themes (Figure 1). Once the data are analyzed, the instructor can assign value to the student’s performance. In this case, the coding process determines that the exercise demonstrated both positive and negative elements of communication and professionalism. Under the category of professionalism, the student generally demonstrated politeness and a positive attitude toward the standardized patient, indicating to the reviewer that the theme of perceived professionalism was apparent during the encounter. However, there were several instances in which confidence and appropriate follow-up were absent. Thus, from a reviewer's perspective, the student's performance could be perceived as indicating an opportunity to grow and improve as a future professional. Typically, there are multiple codes in a category and multiple categories in a theme. However, as seen in the example taxonomy, this is not always the case.

Figure 4. Example of a Latent Content Analysis Taxonomy

If the educator is interested in conducting a latent projective analysis, after identifying the construct of “soft skills,” the researcher allows for each coder to apply their own mental schema as they look for positive and negative indicators of the non-technical skills they believe a student should develop. Mental schema are the cognitive structures that provide organization to knowledge, which in this case allows coders to categorize the data in ways that fit their existing understanding of the construct. The coders will use their own judgement to identify the codes they feel are relevant. The researcher could also choose to apply a theoretical lens to more effectively conceptualize the construct of “soft skills,” such as Rogers' humanism theory, and more specifically, concepts underlying his client-centered therapy. 21 The role of theory in both latent pattern and latent projective analyses is at the discretion of the researcher, and often is determined by what already exists in the literature related to the research question. Though, typically, in latent pattern analyses theory is used for deductive coding, and in latent projective analyses underdeveloped theory is used to first deduce codes and then for induction of the results to strengthen the theory applied. For our example, Rogers describes three salient qualities to develop and maintain a positive client-professional relationship: unconditional positive regard, genuineness, and empathetic understanding. 21 For the third element, specifically, the educator could look for units of meaning that imply empathy and active listening. For our video transcript analysis, this is evident when the student pharmacist demonstrated empathy by responding, "Yeah, I understand," when discussing aggravating factors for the patient's condition. The outcome for both latent pattern and latent projective content analysis is to discover the underlying meaning in a text, such as social rules or mental models. In this example, both pattern and projective approaches can discover interpreted aspects of a student’s abilities and mental models for constructs such as professionalism and empathy. The difference in the approaches is where the precedence lies: in the belief that a pattern is recognizable in the content, or in the mental schema and lived experiences of the coder(s). To better illustrate the differences in the processes of latent pattern and projective content analyses, Figure 5 presents a general outline of each method beginning with the creation of codes and concluding with the generation of themes.

Figure 5. Flow Chart of the Stages of Latent Pattern and Latent Projective Content Analysis

How to Choose a Methodological Approach to Content Analysis

To determine which approach a researcher should take in their content analysis, two decisions need to be made. First, researchers must determine their goal for the analysis. Second, the researcher must decide where they believe meaning is located. 14 If meaning is located in the discrete elements of the content that are easily identified on the surface of the text, then manifest content analysis is appropriate. If meaning is located deep within the content and the researcher plans to discover context cues and make judgements about implied meaning, then latent content analysis should be applied. When designing the latent content analysis, a researcher then must also identify their focus. If the analysis is intended to identify a recognizable truth within the content by uncovering connections and characteristics that all coders should be able to discover, then latent pattern content analysis is appropriate. If, on the other hand, the researcher will rely heavily on the judgment of the coders and believes that interpretation of the content must leverage the mental schema of the coders to locate deeper meaning, then latent projective content analysis is the best choice.

To demonstrate how a researcher might choose a methodological approach, we have presented a third example of data in Figure 6. In our two previous examples of content analysis, we used student data. However, faculty data can also be analyzed as part of educational research or for faculty members to improve their own teaching practices. Recall that in the video data analyzed using latent content analysis, the student was tasked with identifying a suitable over-the-counter medication for a patient complaining of heartburn symptoms. We have extended this example by including an interview with the pharmacy educator supervising the student who was videotaped. The goal of the interview is to evaluate the educator’s ability to assess the student’s performance with the standardized patient. Figure 6 is an excerpt of the interview between the course instructor and an instructional coach. In this conversation, the instructional coach is eliciting evidence to support the faculty member’s views, judgements, and rationale for the educator’s evaluation of the student’s performance.

Figure 6. A Transcript of an Interview in Which the Interviewer (IN) Questions a Faculty Member (FM) Regarding Their Student’s Standardized Patient Experience

Manifest content analysis would be a valid choice for this data if the researcher was looking to identify evidence of the construct of “instructor priorities” and defined discrete codes that described aspects of performance such as “communication,” “referrals,” or “accurate information.” These codes could be easily identified on the surface of the transcribed interview by identifying keywords related to each code, such as “communicate,” “talk,” and “laugh,” for the code of “communication.” This would allow coders to identify evidence of the concept of “instructor priorities” by sorting through a potentially large amount of text with predetermined targets in mind.
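As a sketch of how such predetermined targets might be applied, the Python snippet below tallies keyword hits per code across the transcript; only the "communication" keywords come from the example above, and the other keyword lists are hypothetical.

```python
from collections import Counter

# Keyword-to-code mapping for the construct of "instructor priorities".
# Only the "communication" keywords are named in the text; the rest are hypothetical.
KEYWORDS = {
    "communication": ["communicate", "talk", "laugh"],
    "referrals": ["refer", "physician", "follow up"],
    "accurate information": ["correct", "accurate", "dose"],
}

def tally_codes(transcript: str) -> Counter:
    """Count how many keyword hits support each code in the transcript."""
    text = transcript.lower()
    counts = Counter()
    for code, words in KEYWORDS.items():
        counts[code] = sum(text.count(w) for w in words)
    return counts

excerpt = "I wanted to see the student talk through the options and refer the patient if needed."
print(tally_codes(excerpt))
```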

To conduct a latent pattern analysis of this interview, researchers would first immerse themselves in the data to identify a theoretical framework or concepts that represent the area of interest so that coders could discover an emerging truth underneath the surface of the data. After immersion in the data, a researcher might believe it would be interesting to more closely examine the strategies the coach uses to establish rapport with the instructor as a way to better understand models of professional development. These strategies could not be easily identified in the transcripts if read literally, but by looking for connections within the text, codes related to instructional coaching tactics emerge. A latent pattern analysis would require that the researcher code the data in a way that looks for patterns, such as a code of “facilitating reflection,” that could be identified in open-ended questions and other units of meaning where the coder saw evidence of probing techniques, or a code of “establishing rapport” for which a coder could identify nonverbal cues such as “[IN leans forward in chair].”

Conducting latent projective content analysis might be useful if the researcher was interested in using a broader theoretical lens, such as Mezirow’s theory of transformative learning. 22 In this example, the faculty member is understood to have attempted to change a learner’s frame of reference by facilitating cognitive dissonance or a disorienting experience through a standardized patient simulation. To conduct a latent projective analysis, the researcher could analyze the faculty member’s interview using concepts found in this theory. This kind of analysis will help the researcher assess the level of change that the faculty member was able to perceive, or expected to witness, in their attempt to help their pharmacy students improve their interactions with patients. The units of meaning and subsequent codes would rely on the coders to apply their own knowledge of transformative learning because of the absence in the theory of concrete, context-specific behaviors to identify. For this analysis, the researcher would rely on their interpretations of what challenging educational situations look like, what constitutes cognitive dissonance, or what the faculty member is really expecting from his students’ performance. The subsequent analysis could provide evidence to support the use of such standardized patient encounters within the curriculum as a transformative learning experience and would also allow the educator to self-reflect on his ability to assess simulated activities.

OTHER ASPECTS TO CONSIDER

Navigating terminology.

Among the methodological approaches, there are other terms for content analysis that researchers may come across. Hsieh and Shannon 10 proposed three qualitative approaches to content analysis: conventional, directed, and summative. These categories were intended to explain the role of theory in the analysis process. In conventional content analysis, the researcher does not use preconceived categories because existing theory or literature is limited. In directed content analysis, the researcher attempts to further describe a phenomenon already addressed by theory, applying a deductive approach and using identified concepts or codes from existing research to validate the theory. In summative content analysis, a descriptive approach is taken, identifying and quantifying words or content in order to describe their context. These three categories roughly map to the terms of latent projective, latent pattern, and manifest content analyses respectively, though not precisely enough to suggest that they are synonyms.

Graneheim and colleagues 9 reference the inductive, deductive, and abductive methods of interpretation of content analysis, which are data-driven, concept-driven, and fluid between both data and concepts, respectively. Manifest content analysis most often (but not always) produces descriptions through deductive interpretation, while latent content analysis most often (but not always) produces interpretations through inductive or abductive reasoning. Erlingsson and Brysiewicz 23 refer to content analysis as a continuum, progressing as the researcher develops codes, then categories, and then themes. We present these alternative conceptualizations of content analysis to illustrate that the literature on content analysis, while incredibly useful, presents a multitude of interpretations of the method itself. However, these complexities should not dissuade readers from using content analysis. Identifying what you want to know (ie, your research question) will effectively direct you toward your methodological approach. That said, we have found the most helpful aid in learning content analysis is the application of the methods we have presented.

Ensuring Quality

The standards used to evaluate quantitative research are seldom used in qualitative research. The terms “reliability” and “validity” are typically not used because they reflect the positivist quantitative paradigm. In qualitative research, the preferred term is “trustworthiness,” which comprises the concepts of credibility, transferability, dependability, and confirmability, and researchers can take steps in their work to demonstrate that their findings are trustworthy. 24 Though establishing trustworthiness is outside the scope of this article, novice researchers should be familiar with the necessary steps before publishing their work. This suggestion includes exploration of the concept of saturation, the idea that researchers must demonstrate they have collected and analyzed enough data to warrant their conclusions, which has been a focus of recent debate in qualitative research. 25

There are several threats to the trustworthiness of content analysis in particular. 14 We will use the terms “reliability and validity” to describe these threats, as they are conceptualized this way in the formative literature, and it may be easier for researchers with a quantitative research background to recognize them. Though some of these threats may be particular to the type of data being analyzed, in general, there are risks specific to the different methods of content analysis. In manifest content analysis, reliability is necessary but not sufficient to establish validity. 14 Because there is little judgment required of the coders, lack of high inter-rater agreement among coders will render the data invalid. 14 Additionally, coder fatigue is a common threat to manifest content analysis because the coding is clerical and repetitive in nature.

For latent pattern content analysis, validity and reliability are inversely related. 14 Greater reliability is achieved through more detailed coding rules to improve consistency, but these rules may diminish the accessibility of the coding to consumers of the research. This is defined as low ecological validity. Higher ecological validity is achieved through greater reliance on coder judgment to increase the resonance of the results with the audience, yet this often decreases the inter-rater reliability. In latent projective content analysis, reliability and validity are equivalent. 14 Consistent interpretations among coders both establish and validate the constructed norm; construction of an accurate norm is evidence of consistency. However, because of this equivalence, issues with low validity or low reliability cannot be isolated. A lack of consistency may result from coding rules, lack of a shared schema, or issues with a defined variable. Reasons for low validity cannot be isolated, but will always result in low consistency.

Any good analysis starts with a codebook and coder training. It is important for all coders to share the mental model of the skill, construct, or phenomenon being coded in the data. However, when conducting latent pattern or projective content analysis in particular, micro-level rules and definitions of codes increase the threat to ecological validity, so it is important to leave enough room in the codebook and during the training to allow for a shared mental schema to emerge in the larger group rather than being strictly directed by the lead researcher. Stability is another threat, which occurs when coders make different judgments as time passes. To reduce this risk, allowing for recoding at a later date can increase the consistency and stability of the codes. Reproducibility is not typically a goal of qualitative research, 15 but for content analysis, codes that are defined both prior to and during analysis should retain their meaning. Researchers can increase the reproducibility of their codebook by creating a detailed audit trail, including descriptions of the methods used to create and define the codes, materials used for the training of the coders, and steps taken to ensure inter-rater reliability.

In all forms of qualitative analysis, coder fatigue is a common threat to trustworthiness, even when the instructor is coding individually. Over time, the cases may start to look the same, making it difficult to refocus and look at each case with fresh eyes. To guard against this, coders should maintain a reflective journal and write analytical memos to help stay focused. Memos might include insights that the researcher has, such as patterns of misunderstanding, areas to focus on when considering re-teaching specific concepts, or specific conversations to have with students. Fatigue can also be mitigated by occasionally talking to participants (eg, meeting with students and listening for their rationale on why they included specific pieces of information in an assignment). These are just examples of potential exercises that can help coders mitigate cognitive fatigue. Most researchers develop their own ways to prevent the fatigue that can seep in after long hours of looking at data. But above all, a sufficient amount of time should be allowed for analysis, so that coders do not feel rushed, and regular breaks should be scheduled and enforced.

Qualitative content analysis is both accessible and high-yield for pharmacy educators and researchers. Though some of the methods may seem abstract or fluid, the nature of qualitative content analysis encompasses these concerns by providing a systematic approach to discover meaning in textual data, both on the surface and implied beneath it. As with most research methods, the surest path towards proficiency is through application and intentional, repeated practice. We encourage pharmacy educators to ask questions suited for qualitative research and to consider the use of content analysis as a qualitative research method for discovering meaning in their data.

Content Analysis

This guide provides an introduction to content analysis, a research methodology that examines words or phrases within a wide range of texts.

  • Introduction to Content Analysis: Read about the history and uses of content analysis.
  • Conceptual Analysis: Read an overview of conceptual analysis and its associated methodology.
  • Relational Analysis: Read an overview of relational analysis and its associated methodology.
  • Commentary: Read about issues of reliability and validity with regard to content analysis as well as the advantages and disadvantages of using content analysis as a research methodology.
  • Examples: View examples of real and hypothetical studies that use content analysis.
  • Annotated Bibliography: Complete list of resources used in this guide and beyond.

An Introduction to Content Analysis

Content analysis is a research tool used to determine the presence of certain words or concepts within texts or sets of texts. Researchers quantify and analyze the presence, meanings and relationships of such words and concepts, then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time of which these are a part. Texts can be defined broadly as books, book chapters, essays, interviews, discussions, newspaper headlines and articles, historical documents, speeches, conversations, advertising, theater, informal conversation, or really any occurrence of communicative language. Texts in a single study may also represent a variety of different types of occurrences, such as Palmquist's 1990 study of two composition classes, in which he analyzed student and teacher interviews, writing journals, classroom discussions and lectures, and out-of-class interaction sheets. To conduct a content analysis on any such text, the text is coded, or broken down, into manageable categories on a variety of levels--word, word sense, phrase, sentence, or theme--and then examined using one of content analysis' basic methods: conceptual analysis or relational analysis.

A Brief History of Content Analysis

Historically, content analysis was a time-consuming process. Analysis was done manually, or slow mainframe computers were used to analyze punch cards containing data punched in by human coders. Single studies could employ thousands of these cards. Human error and time constraints made this method impractical for large texts. However, despite its impracticality, content analysis was already an often utilized research method by the 1940s. Although initially limited to studies that examined texts for the frequency of the occurrence of identified terms (word counts), by the mid-1950s researchers were already starting to consider the need for more sophisticated methods of analysis, focusing on concepts rather than simply words, and on semantic relationships rather than just presence (de Sola Pool 1959). While both traditions still continue today, content analysis now is also utilized to explore mental models, and their linguistic, affective, cognitive, social, cultural and historical significance.

Uses of Content Analysis

Perhaps because it can be applied to examine any piece of writing or occurrence of recorded communication, content analysis is currently used in a dizzying array of fields, ranging from marketing and media studies, to literature and rhetoric, ethnography and cultural studies, gender and age issues, sociology and political science, psychology and cognitive science, and many other fields of inquiry. Additionally, content analysis reflects a close relationship with socio- and psycholinguistics, and is playing an integral role in the development of artificial intelligence. The following list (adapted from Berelson, 1952) offers more possibilities for the uses of content analysis:

  • Reveal international differences in communication content
  • Detect the existence of propaganda
  • Identify the intentions, focus or communication trends of an individual, group or institution
  • Describe attitudinal and behavioral responses to communications
  • Determine psychological or emotional state of persons or groups

Types of Content Analysis

In this guide, we discuss two general categories of content analysis: conceptual analysis and relational analysis. Conceptual analysis can be thought of as establishing the existence and frequency of concepts most often represented by words or phrases in a text. For instance, say you have a hunch that your favorite poet often writes about hunger. With conceptual analysis you can determine how many times words such as hunger, hungry, famished, or starving appear in a volume of poems. In contrast, relational analysis goes one step further by examining the relationships among concepts in a text. Returning to the hunger example, with relational analysis, you could identify what other words or phrases hunger or famished appear next to and then determine what different meanings emerge as a result of these groupings.

Conceptual Analysis

Traditionally, content analysis has most often been thought of in terms of conceptual analysis. In conceptual analysis, a concept is chosen for examination, and the analysis involves quantifying and tallying its presence. Also known as thematic analysis [although this term is somewhat problematic, given its varied definitions in current literature--see Palmquist, Carley, & Dale (1997) vis-a-vis Smith (1992)], the focus here is on looking at the occurrence of selected terms within a text or texts, although the terms may be implicit as well as explicit. While explicit terms obviously are easy to identify, coding for implicit terms and deciding their level of implication is complicated by the need to base judgments on a somewhat subjective system. To attempt to limit the subjectivity, then (as well as to limit problems of reliability and validity ), coding such implicit terms usually involves the use of either a specialized dictionary or contextual translation rules. And sometimes, both tools are used--a trend reflected in recent versions of the Harvard and Lasswell dictionaries.

Methods of Conceptual Analysis

Conceptual analysis begins with identifying research questions and choosing a sample or samples. Once chosen, the text must be coded into manageable content categories. The process of coding is basically one of selective reduction. By reducing the text to categories consisting of a word, set of words or phrases, the researcher can focus on, and code for, specific words or patterns that are indicative of the research question.

An example of a conceptual analysis would be to examine several Clinton speeches on health care, made during the 1992 presidential campaign, and code them for the existence of certain words. In looking at these speeches, the research question might involve examining the number of positive words used to describe Clinton's proposed plan, and the number of negative words used to describe the current status of health care in America. The researcher would be interested only in quantifying these words, not in examining how they are related, which is a function of relational analysis. In conceptual analysis, the researcher simply wants to examine presence with respect to his/her research question, i.e. is there a stronger presence of positive or negative words used with respect to proposed or current health care plans, respectively.

Once the research question has been established, the researcher must make his/her coding choices with respect to the eight category coding steps indicated by Carley (1992).

Steps for Conducting Conceptual Analysis

The following discussion of steps that can be followed to code a text or set of texts during conceptual analysis uses campaign speeches made by Bill Clinton during the 1992 presidential campaign as an example. Each step is described in the list below:

  • Decide the level of analysis.

First, the researcher must decide upon the level of analysis . With the health care speeches, to continue the example, the researcher must decide whether to code for a single word, such as "inexpensive," or for sets of words or phrases, such as "coverage for everyone."

  • Decide how many concepts to code for.

The researcher must now decide how many different concepts to code for. This involves developing a pre-defined or interactive set of concepts and categories. The researcher must decide whether or not to code for every single positive or negative word that appears, or only certain ones that the researcher determines are most relevant to health care. Then, with this pre-defined number set, the researcher has to determine how much flexibility he/she allows him/herself when coding. The question of whether the researcher codes only from this pre-defined set, or allows him/herself to add relevant categories not included in the set as he/she finds them in the text, must be answered. Determining a certain number and set of concepts allows a researcher to examine a text for very specific things, keeping him/her on task. But introducing a level of coding flexibility allows new, important material to be incorporated into the coding process that could have significant bearings on one's results.

  • Decide whether to code for existence or frequency of a concept.

After a certain number and set of concepts are chosen for coding , the researcher must answer a key question: is he/she going to code for existence or frequency ? This is important, because it changes the coding process. When coding for existence, "inexpensive" would only be counted once, no matter how many times it appeared. This would be a very basic coding process and would give the researcher a very limited perspective of the text. However, the number of times "inexpensive" appears in a text might be more indicative of importance. Knowing that "inexpensive" appeared 50 times, for example, compared to 15 appearances of "coverage for everyone," might lead a researcher to interpret that Clinton is trying to sell his health care plan based more on economic benefits, not comprehensive coverage. Knowing that "inexpensive" appeared, but not that it appeared 50 times, would not allow the researcher to make this interpretation, regardless of whether it is valid or not.

  • Decide on how you will distinguish among concepts.

The researcher must next decide on the level of generalization, i.e. whether concepts are to be coded exactly as they appear, or if they can be recorded as the same even when they appear in different forms. For example, "expensive" might also appear as "expensiveness." The researcher needs to determine if the two words mean radically different things to him/her, or if they are similar enough that they can be coded as being the same thing, i.e. "expensive words." In line with this is the need to determine the level of implication one is going to allow. This entails more than subtle differences in tense or spelling, as with "expensive" and "expensiveness." Determining the level of implication would allow the researcher to code not only for the word "expensive," but also for words that imply "expensive." This could perhaps include technical words, jargon, or political euphemism, such as "economically challenging," that the researcher decides does not merit a separate category, but is better represented under the category "expensive," due to its implicit meaning of "expensive."

  • Develop rules for coding your texts.

After taking the generalization of concepts into consideration, a researcher will want to create translation rules that will allow him/her to streamline and organize the coding process so that he/she is coding for exactly what he/she wants to code for. Developing a set of rules helps the researcher ensure that he/she is coding things consistently throughout the text, in the same way every time. If a researcher coded "economically challenging" as a separate category from "expensive" in one paragraph, then coded it under the umbrella of "expensive" when it occurred in the next paragraph, his/her data would be invalid. The interpretations drawn from that data will subsequently be invalid as well. Translation rules protect against this and give the coding process a crucial level of consistency and coherence (see the sketch following these steps).

  • Decide what to do with "irrelevant" information.

The next choice a researcher must make involves irrelevant information . The researcher must decide whether irrelevant information should be ignored (as Weber, 1990, suggests), or used to reexamine and/or alter the coding scheme. In the case of this example, words like "and" and "the," as they appear by themselves, would be ignored. They add nothing to the quantification of words like "inexpensive" and "expensive" and can be disregarded without impacting the outcome of the coding.

  • Code the texts.

Once these choices about irrelevant information are made, the next step is to code the text. This is done either by hand, i.e. reading through the text and manually writing down concept occurrences, or through the use of various computer programs. Coding with a computer is one of contemporary conceptual analysis' greatest assets. By inputting one's categories, content analysis programs can easily automate the coding process and examine huge amounts of data, and a wider range of texts, quickly and efficiently. But automation is very dependent on the researcher's preparation and category construction. When coding is done manually, a researcher can recognize errors far more easily. A computer is only a tool and can only code based on the information it is given. This problem is most apparent when coding for implicit information, where category preparation is essential for accurate coding.

  • Analyze your results.

Once the coding is done, the researcher examines the data and attempts to draw whatever conclusions and generalizations are possible. Of course, before these can be drawn, the researcher must decide what to do with the information in the text that is not coded. One's options include either deleting or skipping over unwanted material, or viewing all information as relevant and important and using it to reexamine, reassess and perhaps even alter one's coding scheme. Furthermore, given that the conceptual analyst is dealing only with quantitative data, the levels of interpretation and generalizability are very limited. The researcher can only extrapolate as far as the data will allow. But it is possible to see trends, for example, that are indicative of much larger ideas. Using the example from step three, if the concept "inexpensive" appears 50 times, compared to 15 appearances of "coverage for everyone," then the researcher can pretty safely extrapolate that there does appear to be a greater emphasis on the economics of the health care plan, as opposed to its universal coverage for all Americans. It must be kept in mind that conceptual analysis, while extremely useful and effective for providing this type of information when done right, is limited by its focus and the quantitative nature of its examination. To more fully explore the relationships that exist between these concepts, one must turn to relational analysis.
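Pulling several of the steps above together, here is a minimal Python sketch in which translation rules collapse surface forms onto concepts and the resulting tally can be read either as existence or as frequency; the speech snippet and the rules themselves are hypothetical illustrations, not Carley's procedure.

```python
from collections import Counter

# Hypothetical translation rules: each surface form maps to a single concept.
TRANSLATION_RULES = {
    "inexpensive": "inexpensive",
    "affordable": "inexpensive",
    "expensive": "expensive",
    "expensiveness": "expensive",
    "economically challenging": "expensive",
    "coverage for everyone": "coverage for everyone",
}

def code_speech(text: str) -> Counter:
    """Apply translation rules and return concept frequencies (naive substring matching)."""
    text = text.lower()
    counts = Counter()
    for surface_form, concept in TRANSLATION_RULES.items():
        counts[concept] += text.count(surface_form)
    return counts

speech = ("Our plan is affordable. The current system is expensive, "
          "economically challenging for families, and it is not coverage for everyone.")
frequency = code_speech(speech)
existence = {concept: count > 0 for concept, count in frequency.items()}
print(frequency)
print(existence)
```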

Relational Analysis

Relational analysis, like conceptual analysis, begins with the act of identifying concepts present in a given text or set of texts. However, relational analysis seeks to go beyond presence by exploring the relationships between the concepts identified. Relational analysis has also been termed semantic analysis (Palmquist, Carley, & Dale, 1997). In other words, the focus of relational analysis is to look for semantic, or meaningful, relationships. Individual concepts, in and of themselves, are viewed as having no inherent meaning. Rather, meaning is a product of the relationships among concepts in a text. Carley (1992) asserts that concepts are "ideational kernels;" these kernels can be thought of as symbols which acquire meaning through their connections to other symbols.

Theoretical Influences on Relational Analysis

The kind of analysis that researchers employ will vary significantly according to their theoretical approach. Key theoretical approaches that inform content analysis include linguistics and cognitive science.

Linguistic approaches to content analysis focus analysis of texts on the level of a linguistic unit, typically single clause units. One example of this type of research is Gottschalk (1975), who developed an automated procedure which analyzes each clause in a text and assigns it a numerical score based on several emotional/psychological scales. Another technique is to code a text grammatically into clauses and parts of speech to establish a matrix representation (Carley, 1990).

Approaches that derive from cognitive science include the creation of decision maps and mental models. Decision maps attempt to represent the relationship(s) between ideas, beliefs, attitudes, and information available to an author when making a decision within a text. These relationships can be represented as logical, inferential, causal, sequential, and mathematical relationships. Typically, two of these links are compared in a single study, and are analyzed as networks. For example, Heise (1987) used logical and sequential links to examine symbolic interaction. This methodology is thought of as a more generalized cognitive mapping technique, rather than the more specific mental models approach.

Mental models are groups or networks of interrelated concepts that are thought to reflect conscious or subconscious perceptions of reality. According to cognitive scientists, internal mental structures are created as people draw inferences and gather information about the world. Mental models are a more specific approach to mapping because, beyond extraction and comparison, they can be numerically and graphically analyzed. Such models rely heavily on the use of computers to help analyze and construct mapping representations. Typically, studies based on this approach follow five general steps:

  • Identifying concepts
  • Defining relationship types
  • Coding the text on the basis of 1 and 2
  • Coding the statements
  • Graphically displaying and numerically analyzing the resulting maps

To create the model, a researcher converts a text into a map of concepts and relations; the map is then analyzed on the level of concepts and statements, where a statement consists of two concepts and their relationship. Carley (1990) asserts that this makes possible the comparison of a wide variety of maps, representing multiple sources, implicit and explicit information, as well as socially shared cognitions.
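As a minimal sketch of this representation, the Python snippet below records statements as concept-relation-concept triples and asks simple map-level questions; the concepts and relation labels are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    """A statement is two concepts plus the relationship linking them."""
    source: str
    relation: str
    target: str

# A toy concept map extracted from a hypothetical text.
concept_map = [
    Statement("health care plan", "is", "inexpensive"),
    Statement("health care plan", "provides", "coverage for everyone"),
    Statement("current system", "causes", "rising costs"),
]

# Map-level questions: which concepts appear, and how is one concept connected?
concepts = {c for s in concept_map for c in (s.source, s.target)}
links_from_plan = [s for s in concept_map if s.source == "health care plan"]
print(sorted(concepts))
print(links_from_plan)
```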

Relational Analysis: Overview of Methods

As with other sorts of inquiry, initial choices with regard to what is being studied and/or coded for often determine the possibilities of that particular study. For relational analysis, it is important to first decide which concept type(s) will be explored in the analysis. Studies have been conducted with as few as one and as many as 500 concept categories. Obviously, too many categories may obscure your results and too few can lead to unreliable and potentially invalid conclusions. Therefore, it is important to allow the context and necessities of your research to guide your coding procedures.

The steps to relational analysis that we consider in this guide suggest some of the possible avenues available to a researcher doing content analysis. We provide an example to make the process easier to grasp. However, the choices made within the context of the example are but only a few of many possibilities. The diversity of techniques available suggests that there is quite a bit of enthusiasm for this mode of research. Once a procedure is rigorously tested, it can be applied and compared across populations over time. The process of relational analysis has achieved a high degree of computer automation but still is, like most forms of research, time consuming. Perhaps the strongest claim that can be made is that it maintains a high degree of statistical rigor without losing the richness of detail apparent in even more qualitative methods.

Three Subcategories of Relational Analysis

Affect extraction: This approach provides an emotional evaluation of concepts explicit in a text. It is problematic because emotion may vary across time and populations. Nevertheless, when extended it can be a potent means of exploring the emotional/psychological state of the speaker and/or writer. Gottschalk (1995) provides an example of this type of analysis: by assigning each identified concept a numeric value on corresponding emotional/psychological scales that can then be statistically examined, he claims that the emotional/psychological state of the speaker or writer can be ascertained from their verbal behavior.

Proximity analysis: This approach, on the other hand, is concerned with the co-occurrence of explicit concepts in the text. In this procedure, the text is defined as a string of words. A given length of words, called a window, is determined. The window is then scanned across the text to check for the co-occurrence of concepts. The result is a concept matrix: a group of interrelated, co-occurring concepts that together might suggest a certain overall meaning. The technique is problematic because the window records only explicit concepts and treats meaning as proximal co-occurrence. Other techniques such as clustering, grouping, and scaling are also useful in proximity analysis.
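
A minimal Python sketch of the windowing idea follows, assuming a hand-picked concept list and an arbitrary window size of eight words. Note that overlapping windows count a pair more than once; how to handle that is itself one of the coding choices a researcher must settle.

    # Sketch of proximity analysis: slide a fixed-size window across the text
    # and record which concepts co-occur inside it. Window size and the
    # concept list are arbitrary choices made for illustration.
    from collections import Counter
    from itertools import combinations
    import re

    CONCEPTS = {"scientists", "research", "collaboration", "discoveries"}
    WINDOW = 8  # number of words scanned at a time

    def cooccurrences(text, concepts=CONCEPTS, window=WINDOW):
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter()
        for start in range(max(len(words) - window + 1, 1)):
            present = sorted(set(words[start:start + window]) & concepts)
            for pair in combinations(present, 2):
                counts[pair] += 1
        return counts

    sample = "Scientists engage in research and collaboration, which leads to discoveries."
    print(cooccurrences(sample))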

Cognitive mapping: This approach is one that allows for further analysis of the results from the two previous approaches. It attempts to take the above processes one step further by representing these relationships visually for comparison. Whereas affective and proximal analysis function primarily within the preserved order of the text, cognitive mapping attempts to create a model of the overall meaning of the text. This can be represented as a graphic map that represents the relationships between concepts.

In this manner, cognitive mapping lends itself to the comparison of semantic connections across texts. This is known as map analysis, which allows for comparisons that explore "how meanings and definitions shift across people and time" (Palmquist, Carley, & Dale, 1997). Maps can depict a variety of different mental models (such as that of the text, the writer/speaker, or the social group/period), according to the focus of the researcher. This variety is indicative of the theoretical assumptions that support mapping: mental models are representations of interrelated concepts that reflect conscious or subconscious perceptions of reality; language is the key to understanding these models; and these models can be represented as networks (Carley, 1990). Given these assumptions, it is not surprising how closely this technique reflects the cognitive concerns of socio- and psycholinguistics, or that it lends itself to the development of artificial intelligence models.

Steps for Conducting Relational Analysis

The following discussion presents the steps (or, perhaps more accurately, strategies) that can be followed to code a text or set of texts during relational analysis. These explanations are accompanied by examples of relational analysis possibilities for statements made by Bill Clinton during the 1998 hearings.

  • Identify the Question.

The question is important because it indicates where you are headed and why. Without a focused question, the concept types and options open to interpretation are limitless, making the analysis difficult to complete. Possibilities for the 1998 hearings might be:

What did Bill Clinton say in the speech? OR What concrete information did he present to the public?
  • Choose a sample or samples for analysis.

Once the question has been identified, the researcher must select sections of text/speech from the hearings in which Bill Clinton may not have told the entire truth or is obviously holding back information. For relational content analysis, the primary consideration is how much information to preserve for analysis. One must be careful not to limit the results by preserving too little, but the researcher must also take special care not to take on so much that the coding process becomes too unwieldy and extensive to supply worthwhile results.

  • Determine the type of analysis.

Once the sample has been chosen for analysis, it is necessary to determine what type or types of relationships you would like to examine. There are different subcategories of relational analysis that can be used to examine the relationships in texts.

In this example, we will use proximity analysis because it is concerned with the co-occurrence of explicit concepts in the text. In this instance, we are not particularly interested in affect extraction because we are trying to get to the hard facts of what exactly was said rather than determining the emotional considerations of speaker and receivers surrounding the speech, which may be unrecoverable.

Once the subcategory of analysis is chosen, the selected text must be reviewed to determine the level of analysis. The researcher must decide whether to code for a single word, such as "perhaps," or for sets of words or phrases like "I may have forgotten."

  • Reduce the text to categories and code for words or patterns.

At the simplest level, a researcher can code merely for existence. This is not to say that simplicity of procedure leads to simplistic results. Many studies have successfully employed this strategy. For example, Palmquist (1990) did not attempt to establish the relationships among concept terms in the classrooms he studied; his study did, however, look at the change in the presence of concepts over the course of the semester, comparing a map analysis from the beginning of the semester to one constructed at the end. On the other hand, the requirements of one's specific research question may necessitate deeper levels of coding to preserve greater detail for analysis.

In relation to our extended example, the researcher might code for how often Bill Clinton used words that were ambiguous, held double meanings, or left an opening for change or "re-evaluation." The researcher might also choose to code for which of these ambiguous words appear alongside the most important pieces of information, as in the sketch below.
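
Here is a minimal Python sketch of such an existence/frequency coding pass. The hedge-word and hedge-phrase lists are assumptions made for the example; a real coding scheme would be developed from, and tested against, the research question.

    # Sketch: count a small set of hedging words and phrases in a passage.
    # The lists below are illustrative, not a validated coding scheme.
    import re

    HEDGE_WORDS = {"perhaps", "maybe", "possibly", "unless"}
    HEDGE_PHRASES = ["i may have forgotten", "to the best of my recollection"]

    def code_hedges(text):
        lowered = text.lower()
        words = re.findall(r"[a-z']+", lowered)
        word_hits = {w: words.count(w) for w in HEDGE_WORDS if w in words}
        phrase_hits = {p: lowered.count(p) for p in HEDGE_PHRASES if p in lowered}
        return {"words": word_hits, "phrases": phrase_hits}

    sample = "Perhaps I may have forgotten; maybe the records will clarify this."
    print(code_hedges(sample))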

  • Explore the relationships between concepts (Strength, Sign & Direction).

Once words are coded, the text can be analyzed for the relationships among the concepts set forth. Three properties play a central role in exploring the relations among concepts in content analysis: strength, sign, and direction.

  • Strength of Relationship: Refers to the degree to which two or more concepts are related. These relationships are easiest to analyze, compare, and graph when all relationships between concepts are considered to be equal. However, assigning strength to relationships retains a greater degree of the detail found in the original text. Identifying strength of a relationship is key when determining whether or not words like unless, perhaps, or maybe are related to a particular section of text, phrase, or idea.
  • Sign of a Relationship: Refers to whether the concepts are positively or negatively related. To illustrate, the concept "bear" is negatively related to the concept "stock market" in the same sense as the concept "bull" is positively related. Thus "it's a bear market" could be coded to show a negative relationship between "bear" and "market". Another approach to coding for sign entails the creation of separate categories for binary oppositions. The above example emphasizes "bull" as the negation of "bear," but the two could be coded as separate categories, one positive and one negative. There has been little research to determine the benefits and liabilities of these differing strategies. Use of sign coding for relationships in regard to the hearings may be to find out whether the words under observation or in question were used adversely or in favor of the concepts (this is tricky, but important to establishing meaning).
  • Direction of the Relationship: Refers to the type of relationship categories exhibit. Coding for this sort of information can be useful in establishing, for example, the impact of new information in a decision-making process. Various types of directional relationships include "X implies Y," "X occurs before Y," and "if X then Y," or quite simply the decision whether concept X is the "prime mover" of Y or vice versa. In the case of the 1998 hearings, the researcher might note that "maybe implies doubt," "perhaps occurs before statements of clarification," and "if possibly exists, then there is room for Clinton to change his stance." In some cases, concepts can be said to be bi-directional, or to have equal influence. This is equivalent to ignoring directionality. Both approaches are useful, but differ in focus. Coding all categories as bi-directional is most useful for exploratory studies where pre-coding may influence results, and is also most easily automated, or computer coded.
  • Code the relationships.

One of the main differences between conceptual analysis and relational analysis is that the statements or relationships between concepts are coded. At this point, to continue our extended example, it is important to take special care with assigning value to the relationships in an effort to determine whether the ambiguous words in Bill Clinton's speech are just fillers, or hold information about the statements he is making.
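
One simple way to record a coded relationship along the three dimensions discussed above is sketched below. The field encodings (a 0-to-1 strength, a +1/-1 sign, and a direction label) are illustrative choices rather than a standard scheme.

    # Sketch: a record type for a coded relationship between two concepts,
    # carrying strength, sign, and direction. Field values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class CodedRelation:
        concept_a: str
        concept_b: str
        strength: float   # e.g., 0.0 (weak) to 1.0 (strong)
        sign: int         # +1 positive, -1 negative
        direction: str    # "a->b", "b->a", or "bidirectional"

    relation = CodedRelation("maybe", "doubt", strength=0.8, sign=+1, direction="a->b")
    print(relation)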

  • Perform Statistical Analyses.

This step involves conducting statistical analyses of the data you've coded during your relational analysis. This may involve exploring for differences or looking for relationships among the variables you've identified in your study.
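
As one hypothetical example of such a statistical step, the sketch below uses a chi-square test to ask whether hedge words are distributed differently across two portions of a transcript. The counts are invented, the test is only one of many reasonable choices, and the example assumes SciPy is available.

    # Sketch: chi-square test on invented counts of hedge words versus other
    # words in two portions of a transcript. Assumes SciPy is installed.
    from scipy.stats import chi2_contingency

    #            hedge words, other words
    observed = [[18, 982],   # opening statement (hypothetical counts)
                [45, 955]]   # responses under questioning (hypothetical counts)

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")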

  • Map out the Representations.

In addition to statistical analysis, relational analysis often leads to viewing the representations of the concepts and their associations in a text (or across texts) in a graphical -- or map -- form. Relational analysis is also informed by a variety of different theoretical approaches: linguistic content analysis, decision mapping, and mental models.

The authors of this guide have created the following commentaries on content analysis.

Issues of Reliability & Validity

The issues of reliability and validity are concurrent with those addressed in other research methods. The reliability of a content analysis study refers to its stability, or the tendency for coders to consistently re-code the same data in the same way over a period of time; reproducibility, or the tendency for a group of coders to classify category membership in the same way; and accuracy, or the extent to which the classification of a text corresponds statistically to a standard or norm. Gottschalk (1995) points out that the issue of reliability may be further complicated by the inescapably human nature of researchers. For this reason, he suggests that coding errors can only be minimized, not eliminated (he shoots for 80% as an acceptable margin for reliability).
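
A minimal sketch of one such reliability check follows: compare two coders' category assignments for the same set of units and compute simple percent agreement. The codes are invented, and chance-corrected statistics such as Cohen's kappa are often preferred in practice.

    # Sketch: percent agreement between two coders over the same coded units.
    coder_1 = ["hedge", "fact", "hedge", "fact", "fact", "hedge"]
    coder_2 = ["hedge", "fact", "fact",  "fact", "fact", "hedge"]

    agreements = sum(a == b for a, b in zip(coder_1, coder_2))
    print(f"percent agreement: {agreements / len(coder_1):.0%}")  # 83% for these codes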

On the other hand, the validity of a content analysis study refers to the correspondence of the categories to the conclusions, and the generalizability of results to a theory.

The validity of categories in implicit concept analysis, in particular, is achieved by utilizing multiple classifiers to arrive at an agreed upon definition of the category. For example, a content analysis study might measure the occurrence of the concept category "communist" in presidential inaugural speeches. Using multiple classifiers, the concept category can be broadened to include synonyms such as "red," "Soviet threat," "pinkos," "godless infidels" and "Marxist sympathizers." "Communist" is held to be the explicit variable, while "red," etc. are the implicit variables.

The overarching problem of concept analysis research is the challengeable nature of conclusions reached by its inferential procedures. The question lies in what level of implication is allowable, i.e., do the conclusions follow from the data or are they explainable by some other phenomenon? For occurrence-specific studies, for example, can the second occurrence of a word carry the same weight as the ninety-ninth? Reasonable conclusions can be drawn from substantive amounts of quantitative data, but the question of proof may still remain unanswered.

This problem is best illustrated when one uses computer programs to conduct word counts. The problem of distinguishing between synonyms and homonyms can completely throw off one's results, invalidating any conclusions one infers from them. The word "mine," for example, variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. One may obtain an accurate count of that word's occurrence and frequency, but not an accurate accounting of the meaning inherent in each particular usage. For example, one may find 50 occurrences of the word "mine." But if one is looking specifically for "mine" as an explosive device, and 17 of the occurrences are actually personal pronouns, the count of 50 is inaccurate, and any conclusions drawn from it would be invalid.
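
The "mine" problem can be made concrete with a small sketch: a raw count conflates the senses, and even a crude contextual rule separates them only partially. The heuristic used here, treating "mine" preceded by "the" or "a" as the device/place sense, is purely illustrative; real word-sense disambiguation is far harder.

    # Sketch: a raw count of "mine" versus a crude attempt to separate senses.
    import re

    text = ("Workers cleared the mine before dawn and sealed the mine shaft. "
            "That opinion is mine, and the decision is mine alone.")

    tokens = re.findall(r"[a-z']+", text.lower())
    raw_count = tokens.count("mine")

    # Crude heuristic: "mine" preceded by "the" or "a" is taken as the
    # device/place sense; everything else is assumed to be the pronoun.
    device_like = sum(1 for i, t in enumerate(tokens)
                      if t == "mine" and i > 0 and tokens[i - 1] in {"the", "a"})

    print(f"raw count: {raw_count}, device-like: {device_like}, "
          f"likely pronoun: {raw_count - device_like}")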

The generalizability of one's conclusions, then, is very dependent on how one determines concept categories, as well as on how reliable those categories are. It is imperative that one define categories that accurately measure the ideas and/or items one is seeking to measure. Akin to this is the construction of rules. Developing rules that allow one, and others, to categorize and code the same data in the same way over a period of time, referred to as stability, is essential to the success of a conceptual analysis. Reproducibility, not only of specific categories but of the general methods applied to establishing all sets of categories, makes a study, and its subsequent conclusions and results, more sound. A study which does this, i.e., in which the classification of a text corresponds to a standard or norm, is said to have accuracy.

Advantages of Content Analysis

Content analysis offers several advantages to researchers who consider using it. In particular, content analysis:

  • looks directly at communication via texts or transcripts, and hence gets at the central aspect of social interaction
  • can allow for both quantitative and qualitative operations
  • can provide valuable historical/cultural insights over time through analysis of texts
  • allows a closeness to the text, which can alternate between specific categories and relationships, while also statistically analyzing the coded form of the text
  • can be used to interpret texts for purposes such as the development of expert systems (since knowledge and rules can both be coded in terms of explicit statements about the relationships among concepts)
  • is an unobtrusive means of analyzing interactions
  • provides insight into complex models of human thought and language use

Disadvantages of Content Analysis

Content analysis suffers from several disadvantages, both theoretical and procedural. In particular, content analysis:

  • can be extremely time consuming
  • is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation
  • is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study
  • is inherently reductive, particularly when dealing with complex texts
  • tends too often to simply consist of word counts
  • often disregards the context that produced the text, as well as the state of things after the text is produced
  • can be difficult to automate or computerize

The Palmquist, Carley and Dale study, a summary of "Applications of Computer-Aided Text Analysis: Analyzing Literary and Non-Literary Texts" (1997), is an example of two studies that were conducted using both conceptual and relational analysis. The problematic text for content analysis, below, shows the differences in results obtained by a conceptual and a relational approach to a study.

Related Information: Example of a Problematic Text for Content Analysis

In this example, both students observed a scientist and were asked to write about the experience.

Student A: I found that scientists engage in research in order to make discoveries and generate new ideas. Such research by scientists is hard work and often involves collaboration with other scientists which leads to discoveries which make the scientists famous. Such collaboration may be informal, such as when they share new ideas over lunch, or formal, such as when they are co-authors of a paper.
Student B: It was hard work to research famous scientists engaged in collaboration and I made many informal discoveries. My research showed that scientists engaged in collaboration with other scientists are co-authors of at least one paper containing their new ideas. Some scientists make formal discoveries and have new ideas.

Content analysis coding for explicit concepts may not reveal any significant differences. For example, concepts such as "I, scientist, research, hard work, collaboration, discoveries, new ideas, etc." are explicit in both texts, occur the same number of times, and have the same emphasis. Relational analysis, or cognitive mapping, however, reveals that while all concepts in the text are shared, only five statements are common to both. Analyzing these statements reveals that Student A reports on what "I" found out about "scientists," and elaborates the notion of "scientists" doing "research." Student B focuses on what "I's" research was and sees scientists as "making discoveries" without emphasis on research.

Related Information: The Palmquist, Carley and Dale Study

Consider these two questions: How has the depiction of robots changed over more than a century's worth of writing? And, do students and writing instructors share the same terms for describing the writing process? Although these questions seem totally unrelated, they do share a commonality: in the Palmquist, Carley & Dale study, their answers rely on computer-aided text analysis to demonstrate how different texts can be analyzed.

Literary texts

One half of the study explored the depiction of robots in 27 science fiction texts written between 1818 and 1988. After texts were divided into three historically defined groups, readers looked for how the depiction of robots changed over time. To do this, researchers had to create concept lists and relationship types, create maps using computer software, modify those maps, and then ultimately analyze them. The final product of the analysis revealed that over time authors were less likely to depict robots as metallic humanoids.

Non-literary texts

The second half of the study used student journals and interviews, teacher interviews, textbooks, and classroom observations as the non-literary texts from which concepts and words were taken. The purpose behind the study was to determine whether, in fact, teachers and students would over time begin to share a similar vocabulary about the writing process. Again, researchers used computer software to assist in the process. This time, computers helped researchers generate a concept list based on frequently occurring words and phrases from all texts. Maps were also created and analyzed in this study.

Annotated Bibliography

Resources On How To Conduct Content Analysis

Beard, J., & Yaprak, A. (1989). Language implications for advertising in international markets: A model for message content and message execution. A paper presented at the 8th International Conference on Language Communication for World Business and the Professions. Ann Arbor, MI.

This report discusses the development and testing of a content analysis model for assessing advertising themes and messages aimed primarily at U.S. markets which seeks to overcome barriers in the cultural environment of international markets. Texts were categorized under 3 headings: rational, emotional, and moral. The goal here was to teach students to appreciate differences in language and culture.

Berelson, B. (1971). Content analysis in communication research . New York: Hafner Publishing Company.

While this book provides an extensive outline of the uses of content analysis, it is far more concerned with conveying a critical approach to current literature on the subject. In this respect, it assumes a bit of prior knowledge, but is still accessible through the use of concrete examples.

Budd, R. W., Thorp, R.K., & Donohew, L. (1967). Content analysis of communications . New York: Macmillan Company.

Although published in 1967, the decision of the authors to focus on recent trends in content analysis keeps their insights relevant even to modern audiences. The book focuses on specific uses and methods of content analysis with an emphasis on its potential for researching human behavior. It is also geared toward the beginning researcher and breaks down the process of designing a content analysis study into 6 steps that are outlined in successive chapters. A useful annotated bibliography is included.

Carley, K. (1992). Coding choices for textual analysis: A comparison of content analysis and map analysis. Unpublished Working Paper.

Comparison of the coding choices necessary to conceptual analysis and relational analysis, especially focusing on cognitive maps. Discusses concept coding rules needed for sufficient reliability and validity in a Content Analysis study. In addition, several pitfalls common to texts are discussed.

Carley, K. (1990). Content analysis. In R.E. Asher (Ed.), The Encyclopedia of Language and Linguistics. Edinburgh: Pergamon Press.

Quick, yet detailed, overview of the different methodological kinds of Content Analysis. Carley breaks down her paper into five sections, including: Conceptual Analysis, Procedural Analysis, Relational Analysis, Emotional Analysis and Discussion. Also included is an excellent and comprehensive Content Analysis reference list.

Carley, K. (1989). Computer analysis of qualitative data . Pittsburgh, PA: Carnegie Mellon University.

Presents graphic, illustrated representations of computer based approaches to content analysis.

Carley, K. (1992). MECA . Pittsburgh, PA: Carnegie Mellon University.

A resource guide explaining the fifteen routines that compose the Map Extraction Comparison and Analysis (MECA) software program. Lists the source file, input and output files, and the purpose of each routine.

Carney, T. F. (1972). Content analysis: A technique for systematic inference from communications . Winnipeg, Canada: University of Manitoba Press.

This book introduces and explains in detail the concept and practice of content analysis. Carney defines it; traces its history; discusses how content analysis works and its strengths and weaknesses; and explains through examples and illustrations how one goes about doing a content analysis.

de Sola Pool, I. (1959). Trends in content analysis . Urbana, Ill: University of Illinois Press.

The 1959 collection of papers begins by differentiating quantitative and qualitative approaches to content analysis, and then details facets of its uses in a wide variety of disciplines: from linguistics and folklore to biography and history. Includes a discussion on the selection of relevant methods and representational models.

Duncan, D. F. (1989). Content analysis in health education research: An introduction to purposes and methods. Health Education, 20 (7).

This article proposes using content analysis as a research technique in health education. A review of literature relating to applications of this technique and a procedure for content analysis are presented.

Gottschalk, L. A. (1995). Content analysis of verbal behavior: New findings and clinical applications. Hillside, NJ: Lawrence Erlbaum Associates, Inc.

This book primarily focuses on the Gottschalk-Gleser method of content analysis, and its application as a method of measuring psychological dimensions of children and adults via the content and form analysis of their verbal behavior, using the grammatical clause as the basic unit of communication for carrying semantic messages generated by speakers or writers.

Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage Publications.

This is one of the most widely quoted resources in many of the current studies of Content Analysis. Recommended as another good, basic resource, as Krippendorff presents the major issues of Content Analysis in much the same way as Weber (1990).

Moeller, L. G. (1963). An introduction to content analysis--including annotated bibliography . Iowa City: University of Iowa Press.

A good reference for basic content analysis. Discusses the options of sampling, categories, direction, measurement, and the problems of reliability and validity in setting up a content analysis. Perhaps better as a historical text due to its age.

Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of thematic content analysis. New York: Cambridge University Press.

Billed by its authors as "the first book to be devoted primarily to content analysis systems for assessment of the characteristics of individuals, groups, or historical periods from their verbal materials." The text includes manuals for using various systems, theory, and research regarding the background of systems, as well as practice materials, making the book both a reference and a handbook.

Solomon, M. (1993). Content analysis: a potent tool in the searcher's arsenal. Database, 16 (2), 62-67.

Online databases can be used to analyze data, as well as to simply retrieve it. Online-media-source content analysis represents a potent but little-used tool for the business searcher. Content analysis benchmarks useful to advertisers include prominence, offspin, sponsor affiliation, verbatims, word play, positioning and notational visibility.

Weber, R. P. (1990). Basic content analysis, second edition . Newbury Park, CA: Sage Publications.

Good introduction to Content Analysis. The first chapter presents a quick overview of Content Analysis. The second chapter discusses content classification and interpretation, including sections on reliability, validity, and the creation of coding schemes and categories. Chapter three discusses techniques of Content Analysis, using a number of tables and graphs to illustrate the techniques. Chapter four examines issues in Content Analysis, such as measurement, indication, representation and interpretation.

Examples of Content Analysis

Adams, W., & Shriebman, F. (1978). Television network news: Issues in content research . Washington, DC: George Washington University Press.

A fairly comprehensive application of content analysis to the field of television news reporting. The book's tripartite division discusses current trends and problems with news criticism from a content analysis perspective, presents four different content analysis studies of news media, and makes recommendations for future research in the area. Worth a look by anyone interested in mass communication research.

Auter, P. J., & Moore, R. L. (1993). Buying from a friend: a content analysis of two teleshopping programs. Journalism Quarterly, 70 (2), 425-437.

A preliminary study was conducted to content-analyze random samples of two teleshopping programs, using a measure of content interactivity and a locus of control message index.

Barker, S. P. (???) Fame: A content analysis study of the American film biography. Ohio State University. Thesis.

Barker examined thirty Oscar-nominated films dating from 1929 to 1979, using the O.J. Harvey Belief System and Kohlberg's Moral Stages to determine whether cinema heroes were positive role models for fame and success or morally ambiguous celebrities. Content analysis was successful in determining several trends relative to the frequency and portrayal of women in film, the generally high ethical character of the protagonists, and the dogmatic, close-minded nature of film antagonists.

Bernstein, J. M. & Lacy, S. (1992). Contextual coverage of government by local television news. Journalism Quarterly, 69 (2), 329-341.

This content analysis of 14 local television news operations in five markets looks at how local TV news shows contribute to the marketplace of ideas. Performance was measured as the allocation of stories to types of coverage that provide the context about events and issues confronting the public.

Blaikie, A. (1993). Images of age: a reflexive process. Applied Ergonomics, 24 (1), 51-58.

Content analysis of magazines provides a sharp instrument for reflecting the change in stereotypes of aging over past decades.

Craig, R. S. (1992). The effect of day part on gender portrayals in television commercials: a content analysis. Sex Roles: A Journal of Research, 26 (5-6), 197-213.

Gender portrayals in 2,209 network television commercials were content analyzed. To compare differences between three day parts, the sample was chosen from three time periods: daytime, evening prime time, and weekend afternoon sportscasts. The results indicate large and consistent differences in the way men and women are portrayed in these three day parts, with almost all comparisons reaching significance at the .05 level. Although ads in all day parts tended to portray men in stereotypical roles of authority and dominance, those on weekends tended to emphasize escape from home and family. The findings of earlier studies which did not consider day part differences may now have to be reevaluated.

Dillon, D. R. et al. (1992). Article content and authorship trends in The Reading Teacher, 1948-1991. The Reading Teacher, 45 (5), 362-368.

The authors explore changes in the focus of the journal over time.

Eberhardt, EA. (1991). The rhetorical analysis of three journal articles: The study of form, content, and ideology. Ft. Collins, CO: Colorado State University.

Eberhardt uses content analysis in this thesis paper to analyze three journal articles that reported on President Ronald Reagan's address in which he responded to the Tower Commission report concerning the Iran-Contra Affair. The reports concentrated on three rhetorical elements: idea generation or content; linguistic style or choice of language; and the potential societal effect of both, which Eberhardt analyzes, along with the particular ideological orientation espoused by each magazine.

Ellis, B. G. & Dick, S. J. (1996). 'Who was 'Shadow'? The computer knows: applying grammar-program statistics in content analyses to solve mysteries about authorship. Journalism & Mass Communication Quarterly, 73 (4), 947-963.

This study's objective was to employ the statistics-documentation portion of a word-processing program's grammar-check feature as a final, definitive, and objective tool for content analyses - used in tandem with qualitative analyses - to determine authorship. Investigators concluded there was significant evidence from both modalities to support their theory that Henry Watterson, long-time editor of the Louisville Courier-Journal, probably was the South's famed Civil War correspondent "Shadow" and to rule out another prime suspect, John H. Linebaugh of the Memphis Daily Appeal. Until now, this Civil War mystery has never been conclusively solved, puzzling historians specializing in Confederate journalism.

Gottschalk, L. A., Stein, M. K. & Shapiro, D.H. (1997). The application of computerized content analysis in a psychiatric outpatient clinic. Journal of Clinical Psychology, 53 (5) , 427-442.

Twenty-five new psychiatric outpatients were clinically evaluated and were administered a brief psychological screening battery which included measurements of symptoms, personality, and cognitive function. Included in this assessment procedure were the Gottschalk-Gleser Content Analysis Scales on which scores were derived from five minute speech samples by means of an artificial intelligence-based computer program. The use of this computerized content analysis procedure for initial, rapid diagnostic neuropsychiatric appraisal is supported by this research.

Graham, J. L., Kamins, M. A., & Oetomo, D. S. (1993). Content analysis of German and Japanese advertising in print media from Indonesia, Spain, and the United States. Journal of Advertising , 22 (2), 5-16.

The authors analyze informational and emotional content in print advertisements in order to consider how home-country culture influences firms' marketing strategies and tactics in foreign markets. Research results provided evidence contrary to the original hypothesis that home-country culture would influence ads in each of the target countries.

Herzog, A. (1973). The B.S. Factor: The theory and technique of faking it in America . New York: Simon and Schuster.

Herzog takes a look at the rhetoric of American culture using content analysis to point out discrepancies between intention and reality in American society. The study reveals, albeit in a comedic tone, how double talk and "not quite lies" are pervasive in our culture.

Horton, N. S. (1986). Young adult literature and censorship: A content analysis of seventy-eight young adult books . Denton, TX: North Texas State University.

The purpose of Horton's content analysis was to analyze a representative sample of seventy-eight current young adult books to determine the extent to which they contain items which are objectionable to would-be censors. Seventy-eight books were identified which fit the criteria of popularity and literary quality. Each book was analyzed for, and tallied for occurrence of, six categories, including profanity, sex, violence, parent conflict, drugs, and condoned bad behavior.

Isaacs, J. S. (1984). A verbal content analysis of the early memories of psychiatric patients . Berkeley: California School of Professional Psychology.

Isaacs did a content analysis investigation on the relationship between words and phrases used in early memories and clinical diagnosis. His hypothesis was that in conveying their early memories schizophrenic patients tend to use an identifiable set of words and phrases more frequently than do nonpatients and that schizophrenic patients use these words and phrases more frequently than do patients with major affective disorders.

Jean Lee, S. K. & Hwee Hoon, T. (1993). Rhetorical vision of men and women managers in Singapore. Human Relations, 46 (4), 527-542.

A comparison of media portrayal of male and female managers' rhetorical vision in Singapore is made. Content analysis of newspaper articles used to make this comparison also reveals the inherent conflicts that women managers have to face. Purposive and multi-stage sampling of articles are utilized.

Kaur-Kasior, S. (1987). The treatment of culture in greeting cards: A content analysis . Bowling Green, OH: Bowling Green State University.

Using six historical periods dating from 1870 to 1987, this content analysis study attempted to determine what structural/cultural aspects of American society were reflected in greeting cards. The study determined that the size of cards increased over time, included more pages, and had animals and flowers as their most dominant symbols. In addition, white was the most common color used. Due to habituation and specialization, says the author, greeting cards have become institutionalized in American culture.

Koza, J. E. (1992). The missing males and other gender-related issues in music education: A critical analysis of evidence from the Music Supervisor's Journal, 1914-1924. Paper presented at the annual meeting of the American Educational Research Association. San Francisco.

The goal of this study was to identify all educational issues that would today be explicitly gender related and to analyze the explanations past music educators gave for the existence of gender-related problems. A content analysis of every gender-related reference was undertaken, finding that the current preoccupation with males in music education has a long history and that little has changed since the early part of this century.

Laccinole, M. D. (1982). Aging and married couples: A language content analysis of a conversational and expository speech task . Eugene, OR: University of Oregon.

Using content analysis, this paper investigated the relationship of age to the use of grammatical categories, and described the differences in the usage of these grammatical categories in a conversational and expository speech task by fifty married couples. The subjects Laccinole used in his analysis were Caucasian, English-speaking, middle-class, ranged in age from 20 to 83 years, were in good health, and had no history of communication disorders.
Laffal, J. (1995). A concept analysis of Jonathan Swift's 'A Tale of a Tub' and 'Gulliver's Travels.' Computers and Humanities, 29 (5), 339-362.

In this study, comparisons of concept profiles of "Tub," "Gulliver," and Swift's own contemporary texts, as well as a composite text of 18th century writers, reveal that "Gulliver" is conceptually different from "Tub." The study also discovers that the concepts and words of these texts suggest two strands in Swift's thinking.

Lewis, S. M. (1991). Regulation from a deregulatory FCC: Avoiding discursive dissonance. Masters Thesis, Fort Collins, CO: Colorado State University.

This thesis uses content analysis to examine inconsistent statements made by the Federal Communications Commission (FCC) in its policy documents during the 1980s. Lewis analyzes positions set forth by the FCC in its policy statements and catalogues different strategies that can be used by speakers to be or to appear consistent, as well as strategies to avoid inconsistent speech or discursive dissonance.

Norton, T. L. (1987). The changing image of childhood: A content analysis of Caldecott Award books. Los Angeles: University of South Carolina.

Content analysis was conducted on 48 Caldecott Medal Recipient books dating from 1938 to 1985 to determine whether they reflect the idea that the social perception of childhood has altered since the early 1960s. The results revealed an increasing "loss of childhood innocence," as well as a general sentimentality for childhood pervasive in the texts. Suggests further study of children's literature to confirm the validity of such study.

O'Dell, J. W. & Weideman, D. (1993). Computer content analysis of the Schreber case. Journal of Clinical Psychology, 49 (1), 120-125.

An example of the application of content analysis as a means of recreating a mental model of the psychology of an individual.

Pratt, C. A. & Pratt, C. B. (1995). Comparative content analysis of food and nutrition advertisements in Ebony, Essence, and Ladies' Home Journal. Journal of Nutrition Education, 27 (1), 11-18.

This study used content analysis to measure the frequencies and forms of food, beverage, and nutrition advertisements and their associated health-promotional messages in three U.S. consumer magazines during two 3-year periods: 1980-1982 and 1990-1992. The study showed statistically significant differences among the three magazines in both frequencies and types of major promotional messages in the advertisements. Differences between the advertisements in Ebony and Essence, the readerships of which were primarily African-American, and those found in Ladies Home Journal were noted, as were changes in the two time periods. An interesting tie-in to ethnographic research studies?
Riffe, D., Lacy, S., & Drager, M. W. (1996). Sample size in content analysis of weekly news magazines. Journalism & Mass Communication Quarterly, 73 (3), 635-645.

This study explores a variety of approaches to deciding sample size in analyzing magazine content. Having tested random samples of size six, eight, ten, twelve, fourteen, and sixteen issues, the authors show that a monthly stratified sample of twelve issues is the most efficient method for inferring to a year's issues.

Roberts, S. K. (1987). A content analysis of how male and female protagonists in Newbery Medal and Honor books overcome conflict: Incorporating a locus of control framework. Fayetteville, AR: University of Arkansas.

The purpose of this content analysis was to analyze Newbery Medal and Honor books in order to determine how male and female protagonists were assigned behavioral traits in overcoming conflict as it relates to an internal or external locus of control schema. Roberts used all, instead of just a sample, of the fictional Newbery Medal and Honor books which met his study's criteria. A total of 120 male and female protagonists were categorized, from Newbery books dating from 1922 to 1986.

Schneider, J. (1993). Square One TV content analysis: Final report . New York: Children's Television Workshop.

This report summarizes the mathematical and pedagogical content of the 230 programs in the Square One TV library after five seasons of production, relating that content to the goals of the series which were to make mathematics more accessible, meaningful, and interesting to the children viewers.

Smith, T. E., Sells, S. P., and Clevenger, T. Ethnographic content analysis of couple and therapist perceptions in a reflecting team setting. The Journal of Marital and Family Therapy, 20 (3), 267-286.

An ethnographic content analysis was used to examine couple and therapist perspectives about the use and value of reflecting team practice. Postsession ethnographic interviews from both couples and therapists were examined for the frequency of themes in seven categories that emerged from a previous ethnographic study of reflecting teams. Ethnographic content analysis is briefly contrasted with conventional modes of quantitative content analysis to illustrate its usefulness and rationale for discovering emergent patterns, themes, emphases, and process using both inductive and deductive methods of inquiry.

Stahl, N. A. (1987). Developing college vocabulary: A content analysis of instructional materials. Reading, Research and Instruction , 26 (3).

This study investigates the extent to which the content of 55 college vocabulary texts is consistent with current research and theory on vocabulary instruction. It recommends less reliance on memorization and more emphasis on deep understanding and independent vocabulary development.

Swetz, F. (1992). Fifteenth and sixteenth century arithmetic texts: What can we learn from them? Science and Education, 1 (4).

Surveys the format and content of 15th and 16th century arithmetic textbooks, discussing the types of problems that were most popular in these early texts and briefly analyses problem contents. Notes the residual educational influence of this era's arithmetical and instructional practices.
Walsh, K., et al. (1996). Management in the public sector: a content analysis of journals. Public Administration, 74 (2), 315-325.

The popularity and implementation of managerial ideas from 1980 to 1992 are examined through the content of five journals focusing on local government, health, education, and social service. Contents were analyzed according to commercialism, user involvement, performance evaluation, staffing, strategy, and involvement with other organizations. Overall, local government showed the most involvement with commercialism, while health and social care articles were most concerned with user involvement.

For Further Reading

Abernethy, A. M., & Franke, G. R. (1996). The information content of advertising: a meta-analysis. Journal of Advertising, Summer 25 (2), 1-18.

Carley, K., & Palmquist, M. (1992). Extracting, representing and analyzing mental models. Social Forces , 70 (3), 601-636.

Fan, D. (1988). Predictions of public opinion from the mass media: Computer content analysis and mathematical modeling . New York, NY: Greenwood Press.

Franzosi, R. (1990). Computer-assisted coding of textual data: An application to semantic grammars. Sociological Methods and Research, 19 (2), 225-257.

McTavish, D.G., & Pirro, E. (1990) Contextual content analysis. Quality and Quantity , 24 , 245-265.

Palmquist, M. E. (1990). The lexicon of the classroom: language and learning in writing class rooms . Doctoral dissertation, Carnegie Mellon University, Pittsburgh, PA.

Palmquist, M. E., Carley, K.M., and Dale, T.A. (1997). Two applications of automated text analysis: Analyzing literary and non-literary texts. In C. Roberts (Ed.), Text Analysis for the Social Sciences: Methods for Drawing Statistical Inferences from Texts and Transcripts. Hillsdale, NJ: Lawrence Erlbaum Associates.

Roberts, C.W. (1989). Other than counting words: A linguistic approach to content analysis. Social Forces, 68 , 147-177.

Issues in Content Analysis

Jolliffe, L. (1993). Yes! More content analysis! Newspaper Research Journal , 14 (3-4), 93-97.

The author responds to an editorial essay by Barbara Luebke which criticizes excessive use of content analysis in newspaper content studies. The author points out the positive applications of content analysis when it is theory-based and utilized as a means of suggesting how or why the content exists, or what its effects on public attitudes or behaviors may be.

Kang, N., Kara, A., Laskey, H. A., & Seaton, F. B. (1993). A SAS MACRO for calculating intercoder agreement in content analysis. Journal of Advertising, 22 (2), 17-28.

A key issue in content analysis is the level of agreement across the judgments which classify the objects or stimuli of interest. A review of articles published in the Journal of Advertising indicates that many authors are not fully utilizing recommended measures of intercoder agreement and thus may not be adequately establishing the reliability of their research. This paper presents a SAS MACRO which facilitates the computation of frequently recommended indices of intercoder agreement in content analysis.
Lacy, S. & Riffe, D. (1996). Sampling error and selecting intercoder reliability samples for nominal content categories. Journalism & Mass Communication Quarterly, 73 (4), 693-704.

This study views intercoder reliability as a sampling problem. It develops a formula for generating sample sizes needed to have valid reliability estimates. It also suggests steps for reporting reliability. The resulting sample sizes will permit a known degree of confidence that the agreement in a sample of items is representative of the pattern that would occur if all content items were coded by all coders.

Riffe, D., Aust, C. F., & Lacy, S. R. (1993). The effectiveness of random, consecutive day and constructed week sampling in newspaper content analysis. Journalism Quarterly, 70 (1), 133-139.

This study compares 20 sets each of samples for four different sizes using simple random, constructed week and consecutive day samples of newspaper content. Comparisons of sample efficiency, based on the percentage of sample means in each set of 20 falling within one or two standard errors of the population mean, show the superiority of constructed week sampling.

Thomas, S. (1994). Artifactual study in the analysis of culture: A defense of content analysis in a postmodern age. Communication Research, 21 (6), 683-697.

Although both modern and postmodern scholars have criticized the method of content analysis with allegations of reductionism and other epistemological limitations, it is argued here that these criticisms are ill founded. In building an argument for the validity of content analysis, the general value of artifact or text study is first considered.

Zollars, C. (1994). The perils of periodical indexes: Some problems in constructing samples for content analysis and culture indicators research. Communication Research, 21 (6), 698-714.

The author examines problems in using periodical indexes to construct research samples via the use of content analysis and culture indicator research. Historical and idiosyncratic changes in index subject category headings and subheadings make article headings potentially misleading indicators. Index subject categories are not necessarily invalid as a result; nevertheless, the author discusses the need to test for category longevity, coherence, and consistency over time, and suggests the use of oversampling, cross-references, and other techniques as a means of correcting and/or compensating for hidden inaccuracies in classification, and as a means of constructing purposive samples for analytic comparisons.

Citation Information

Carol Busch, Paul S. De Maret, Teresa Flynn, Rachel Kellum, Sheri Le, Brad Meyers, Matt Saunders, Robert White, and Mike Palmquist. (1994-2024). Content Analysis. The WAC Clearinghouse. Colorado State University. Available at https://wac.colostate.edu/repository/writing/guides/.

Copyright Information

Copyright © 1994-2024 Colorado State University and/or this site's authors, developers, and contributors. Some material displayed on this site is used with permission.

Content analysis is defined as

"the systematic reading of a body of texts, images, and symbolic matter, not necessarily from an author's or user's perspective" (Krippendorff, 2004).

Content analysis is distinguished from other kinds of social science research in that it does not require the collection of data from people. Like documentary research, content analysis is the study of recorded information, or information which has been recorded in texts, media, or physical items. 

For more information about content analysis, review the resources below:

Books and articles

Below are a few tools and online guides that can help you start your content analysis research. These include free online resources and resources available only through the ISU Library.

  • Quantitative Content Analysis, by Kate Huxley (2020). This entry examines quantitative content analysis, a method based on the systematic coding and quantification of content, be that written, visual, or oral content.
  • Qualitative Content Analysis. This article describes an approach to systematic, rule-guided qualitative text analysis that tries to preserve some methodological strengths of quantitative content analysis and widen them into a concept of qualitative procedure.
  • Basic Content Analysis, by Robert Philip Weber (1990). Call number: H61 W422 1990.

Additional Resources

  • An Introduction to Content Analysis A tutorial-type guide to content analysis from Colorado State University.
  • Overview of Content Analysis An article from the peer-reviewed online journal, Practical Assessment, Research & Evaluation by Steve Stemler of Yale University.

Who is Hispanic?

Debates over who is Hispanic and who is not have often fueled conversations about identity among Americans who trace their heritage to Latin America or Spain. Recently, results from the 2020 census have drawn attention to how Hispanic identity is defined and measured in the United States.

A line chart showing that the U.S. Hispanic population reached nearly 64 million in 2022.

The once-a-decade head count of all people living in the U.S. used a different approach from previous censuses to measure racial identity, which has provided new insight into how Hispanics view their racial identity. At the same time, the federal government has proposed a change to how race and ethnicity are measured in government surveys like the decennial census, bringing even more attention and debate.

So, who is considered Hispanic in the U.S. today? How exactly do the federal government and others count the Hispanic population? What role does race play in deciding who counts as Hispanic? And how do surveys incorporate various terms people use to describe their Hispanic identity, such as Latina or Latinx?

We’ll answer these common questions and others here.

To answer the question of who is Hispanic, this analysis draws on five decades of U.S. Census Bureau data and two decades of Pew Research Center surveys of Hispanic adults in the United States.

National counts of the Latino population come from the Census Bureau's decennial census (this includes PL94-171 census data) and official population estimates. The bureau's American Community Survey (ACS) provides demographic details such as race, country of origin and intermarriage rates. Some ACS data was accessed through the Integrated Public Use Microdata Series (IPUMS) from the University of Minnesota.

Views of Hispanic identity draw on the Center's National Survey of Latinos (NSL), which is fielded in English and Spanish. Hispanics have taken the survey online since 2019, primarily through the American Trends Panel (ATP), which is recruited through national, random sampling of residential addresses. This way nearly all adults have a chance of selection. The survey is weighted to be representative of the U.S. Hispanic adult population by gender, Hispanic origin, partisan affiliation, education and other categories. Read more about the ATP's methodology. The NSL was conducted by phone from 2002 to 2018.

Read further details on how the Census Bureau asked about race and ethnicity and coded responses in the 2020 census. Here is a full list of origin groups that were coded as Hispanic in the 2020 census.

How many Hispanics are in the U.S. today?

The Census Bureau estimates there were roughly 63.7 million Hispanics in the U.S. as of 2022, a new high. They made up 19% of the nation's population.

Behind the official Census Bureau number, however, lies a long history of changing labels, shifting categories and revised question wording on census forms. That history reflects evolving cultural norms about what it means to be Hispanic or Latino in the U.S. today.

How are Hispanics counted in government surveys, public opinion polls and other studies?

Before diving into the details, keep in mind that some surveys ask about Hispanic origin and race separately, following current Census Bureau practices:

A screenshot from the 2020 Census showing how the U.S. Census Bureau determines who is Hispanic.

One way to count Hispanics is to include those who say they are Hispanic, with no exceptions – that is, you are Hispanic if you say you are. Pew Research Center uses this approach in our surveys, as do other polling firms such as Gallup and voter exit polls.

The Census Bureau largely counts Hispanics this way, too, but with some exceptions. If respondents select only the “Other Hispanic” category and write in only non-Hispanic responses such as “Irish,” the Census Bureau recodes the response as non-Hispanic.

However, beginning in 2020, it widened the lens to include a relatively small number of people who did not check a Hispanic box on the census form but answered the race question in a way that implied a Hispanic background. As a result, someone who wrote that their race is “Mexican” or “Argentinean” in the race question was counted as Hispanic, even if they did not check the Hispanic box.

From the available data, the exact number of respondents affected by this change is difficult to determine, but it appears to be about 1% of Hispanics or fewer.

How did Hispanics identify their race in the 2020 census?

In the eyes of the Census Bureau, Hispanics can be of any race, because “Hispanic” is an ethnicity and not a race. However, this distinction is subject to debate. A 2015 Center survey found that 17% of Hispanic adults said being Hispanic is mainly a matter of race, while 29% said it is mainly a matter of ancestry. Another 42% said it is mainly a matter of culture.

A bar chart showing that most Hispanics do not identify their race only as White, Black or Asian.

Nonetheless, the Census Bureau’s 2021 American Community Survey (ACS) provides the self-reported racial identity of Hispanics. For example, 22.1 million single-race Hispanics identified only as “some other race,” a group that mostly includes those who wrote in a Hispanic origin or nationality as their race. The next largest single-race group among Hispanics was White (10.2 million), followed by American Indian (1.4 million), Black (900,000) and Asian (300,000).

In addition, about 27.6 million Hispanics identified as more than one race in 2021, up from just 3 million in 2010. The sharp increase in multiracial Hispanics could be due to several factors, including changes to the census form that added space for written responses to the race question and growing racial diversity among Hispanics. The former explanation is supported by the fact that more than 25 million of the Hispanics who identified as two or more races in 2021 were coded as “some other race” (and wrote in a response) and one of the specific races (such as Black or White).

Growth in multiracial Hispanics comes primarily from those who identify as White and “some other race.” That population grew from 1.6 million to 23.7 million between 2010 and 2021. The number of Hispanics who identify as White and no other race declined from 26.7 million to 10.2 million.

Is there an official definition of Hispanic or Latino?

In 1976, the U.S. Congress passed a law that required the government to collect and analyze data for a specific ethnic group: “Americans of Spanish origin or descent.” That legislation defined this group as “Americans who identify themselves as being of Spanish-speaking background and trace their origin or descent from Mexico, Puerto Rico, Cuba, Central and South America, and other Spanish-speaking countries.” This includes 20 Spanish-speaking nations from Latin America and Spain itself, but not Portugal or Portuguese-speaking Brazil.

The Office of Management and Budget (OMB) developed standards for collecting data on Hispanics in 1977 and revised them in 1997. Schools, public health facilities and other government entities and agencies use these standards to track how many Hispanics they serve – the primary goal of the 1976 law.

In 2023, an OMB working group sought public feedback on a proposal to combine the race and ethnicity questions asked in federal surveys, including the decennial census. The proposal would add checkboxes for “Hispanic or Latino” and “Middle Eastern or North African.” Officials hope the changes will reduce the number of Americans who choose the “Some other race” category, especially among Hispanics.

The review of the proposal is scheduled to be completed by summer 2024. Approved changes would be implemented in the 2030 census and other Census Bureau surveys. However, it’s worth noting that public feedback has included concerns that combining the race and ethnicity questions could lead to an undercount of the nation’s Afro-Latino population.

What’s the difference between Hispanic and Latino?

“Hispanic” and “Latino” are pan-ethnic terms meant to describe – and summarize – the population of people of that ethnic background living in the U.S. In practice, the Census Bureau often uses the term “Hispanic” or “Hispanic or Latino.” We use the terms “Hispanic” and “Latino” interchangeably for this population in our work.

A bar chart that shows Hispanics describe their identity in different ways.

Some people have drawn sharp distinctions between these two terms. For example, some say that Hispanics are from Spain or from Spanish-speaking countries in Latin America, which matches the federal definition, and Latinos are people from Latin America regardless of language. In this definition, Latinos would include people from Brazil (where Portuguese is the official language) but not Spain or Portugal.

Despite this debate, the Hispanic and Latino labels are not universally embraced by the population that has been labeled, even as they are widely used. Our own surveys show a preference for other terms to describe identity. A 2019 survey found that 47% of Hispanics most often described themselves by their family’s country of origin, while 39% used the terms Latino or Hispanic and 14% most often described themselves as American.

Another Center survey in 2022 found that 53% of Hispanics prefer to describe themselves as “Hispanic,” 26% prefer “Latino,” 2% prefer “Latinx” and 18% have no preference.

Who uses ‘Latinx’?

Latinx is a pan-ethnic identity term that has emerged in recent years as an alternative to Hispanic and Latino. Some news and entertainment outlets, corporations, local governments and universities use it to describe the nation’s Hispanic population. Yet the use of Latinx is not common practice, and there is debate about its appropriateness in a gendered language like Spanish. Some critics say it ignores the gendered forms of the Spanish language, while others see Latinx as a gender- and LGBTQ-inclusive term. Adding to this debate, some lawmakers have gone as far as introducing legislation to ban use of the term in government communication.

A pie chart that shows most Latino adults have not heard of the term Latinx, and few use it.

The term is not well known among the population it is meant to describe. In a 2019 Center survey, only 23% of U.S. adults who self-identified as Hispanic or Latino had heard of the term, and just 3% said they use it to describe themselves.

However, awareness and use of the term varied across subgroups of Hispanics. For example, 42% of those ages 18 to 29 said they had heard of the term, compared with 7% of those 65 and older. And among the youngest Hispanic adults, women were much more likely than men to say they use the term (14% vs. 1%).

The emergence of Latinx coincides with a global movement to introduce gender-neutral nouns and pronouns into many languages that have traditionally used male or female constructions. In the U.S., Latinx first appeared more than a decade ago, and it was added to a widely used English dictionary in 2018. Another gender-neutral pan-ethnic label, Latine, has also emerged and is largely used in Spanish.

How do factors like language, last name and parental background impact whether someone is considered Hispanic?

Many U.S. Hispanics have an inclusive view of what it means to be Hispanic. In a 2015 Center survey, 71% of Hispanic adults said speaking Spanish is not required to be considered Hispanic, and 84% said having a Spanish last name is not required. However, in a 2019 survey, 32% of Hispanic adults said having two Hispanic parents is an essential part of what being Hispanic means to them.

A chart showing that, in 2021, 3 in 10 Hispanic newlyweds married someone who is not Hispanic.

Views of Hispanic identity may change in the coming decades as broad societal changes, such as rising intermarriage rates, produce an increasingly diverse and multiracial U.S. population.

In 2021, 30% of Hispanic newlyweds married someone who is not Hispanic. The Hispanic intermarriage rate is similar to the rate for Asians (32%) but higher than the rate for Black (21%) and White (14%) newlyweds. Among Hispanic newlyweds, 40% of those born in the U.S. married someone who is not Hispanic, compared with 12% of immigrant newlyweds, according to an analysis of ACS data.

Among all married Hispanics in 2021, 21% had a spouse who is not Hispanic.

Our 2015 survey found that 15% of U.S. Hispanic adults had at least one parent who is not Hispanic. This share rose to 29% among the U.S. born and 48% among the third or higher generation – those born in the U.S. to parents who were also U.S. born.

What role does skin color play in whether someone is Hispanic?

As with race, Latinos can have many different skin tones. A 2021 survey of Latino adults showed respondents a palette of 10 skin colors and asked them to choose which one most closely resembled their own.

Latinos reported having a variety of skin tones, reflecting the diversity within the group. Eight-in-ten Latinos selected one of the four lightest skin colors, and the second-lightest was most common (28%), followed by the third- (21%) and fourth-lightest colors (17%). By contrast, only 3% selected one of the four darkest skin colors.

A majority of Latino adults (57%) say skin color shapes their daily life experiences at least somewhat. Similar shares also say having a darker skin color hurts Latinos’ ability to get ahead in the U.S. (62%) and having a lighter skin color helps Latinos get ahead (59%).

Are Afro-Latinos Hispanic?

A bar chart showing that Afro-Latinos are about 2% of the U.S. adult population and 12% of Latino adults, but almost one-in-seven do not identify as Hispanic or Latino.

Afro-Latino identity is distinct from and can exist alongside a person’s Hispanic identity. Afro-Latinos’ life experiences are shaped by race, skin tone and other factors in ways that differ from other Hispanics. While most Afro-Latinos identify as Hispanic or Latino, not all do, according to our estimates based on a survey of U.S. adults conducted in 2019 and 2020.

In 2020, about 6 million Afro-Latino adults lived in the U.S., making up about 2% of the U.S. adult population and 12% of the adult Latino population. About one-in-seven Afro-Latinos – an estimated 800,000 adults – do not identify as Hispanic.

Does country of origin or ancestry affect whether someone is Hispanic?

Similar to race and skin color, Hispanics can be of any country of origin or ancestry. However, people from certain countries may be more likely to identify as Hispanic on census forms. For example, in a Center analysis of the 2021 ACS, nearly all immigrants from several Latin American and Caribbean countries called themselves Hispanic. That included nearly 100% of those from Mexico, Cuba and El Salvador, among many others; 97% of those from Venezuela; 94% from Chile; 93% from Spain; 92% from Argentina; and 88% from Panama.
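A tabulation like this could be produced from ACS microdata roughly as sketched below; the column names and the tiny illustrative extract are assumptions for the example, not the actual Census Bureau file or results.

```python
import pandas as pd

# Tiny illustrative extract; a real ACS microdata file has millions of person records.
acs = pd.DataFrame({
    "birth_country": ["Mexico", "Mexico", "Venezuela", "Venezuela", "Panama", "Panama"],
    "is_hispanic":   [1, 1, 1, 0, 1, 0],        # self-identified Hispanic flag
    "person_weight": [120, 95, 80, 5, 60, 10],  # survey weight
})

# Weighted share identifying as Hispanic, by country of birth
acs["weighted_hispanic"] = acs["is_hispanic"] * acs["person_weight"]
grouped = acs.groupby("birth_country")[["weighted_hispanic", "person_weight"]].sum()
grouped["share_hispanic"] = grouped["weighted_hispanic"] / grouped["person_weight"]

print(grouped["share_hispanic"].sort_values(ascending=False).round(2))
```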

Are Brazilians, Portuguese, Belizeans and Filipinos considered Hispanic?

A chart showing the estimated population of several origin groups in the U.S. increased in 2020 due to a coding error.

Officially, Brazilians are not considered Hispanic or Latino because the federal government’s definition – last revised in 1997 – applies only to those of “Spanish culture or origin.” In most cases, people who report their Hispanic or Latino ethnicity as Brazilian in Census Bureau surveys are later recategorized – or “back coded” – as not Hispanic or Latino. The same is true for people with origins in Belize, the Philippines and Portugal.

However, an error in how the Census Bureau processed data from a 2020 national survey omitted some of the coding and provides a rare window into how Brazilians living in the U.S. view their identity.

In 2020, at least 416,000 Brazilians — more than two-thirds of Brazilians in the U.S. — described themselves as Hispanic or Latino on the ACS and were mistakenly counted that way. Only 14,000 Brazilians were counted as Hispanic in 2019, and 16,000 in 2021.

In addition, 30,000 more people with Filipino origin were counted as Hispanic or Latino in 2020 than in 2021. The number with origins in non-Hispanic Caribbean countries – including Haiti, Jamaica, Guyana and the Virgin Islands – was 28,000 higher. The number from Belize was almost 12,000 higher than in 2021, but the number with Portuguese origin was similar to other recent years.

The increase in the Hispanic population among Brazilians in 2020 was far higher than for the other groups because 70% of Brazilians considered themselves to be Hispanic or Latino – compared with 41% of Belizeans, 3% of Filipinos and 3% of those of non-Hispanic Caribbean origin.

How many people with Hispanic ancestry do not identify as Hispanic?


Of the 42.7 million adults with Hispanic ancestry living in the U.S. in 2015, an estimated 5 million people, or 11%, said they do not identify as Hispanic or Latino, according to a Center survey. These people aren’t counted as Hispanic in our surveys.

Notably, Hispanic self-identification varies across immigrant generations. Among immigrants from Latin America, nearly all identify as Hispanic. But by the fourth generation, only half of people with Hispanic heritage in the U.S. identify as Hispanic.

How has the Census Bureau changed the way it counts Hispanics over time?

The Census Bureau first asked everybody in the country about Hispanic ethnicity in 1980, but it made some efforts before then to count people who today would be considered Hispanic. In the 1930 census, for example, the race question had a category for “Mexican.”

The first major attempt to estimate the size of the nation’s Hispanic population came in 1970 and prompted widespread concerns among Hispanic organizations about an undercount. A portion of the U.S. population (5%) was asked if their origin or descent was from the following categories: “Mexican, Puerto Rican, Cuban, Central or South American, Other Spanish” or “No, none of these.”

This approach indeed undercounted about 1 million Hispanics. Many second-generation Hispanics did not select one of the Hispanic groups because the question did not include terms like “Mexican American.” The question wording also resulted in hundreds of thousands of people living in the Southern or Central regions of the U.S. being mistakenly included in the “Central or South American” category.

By 1980, the current approach – in which someone is asked if they are Hispanic – had taken hold, with some changes to the question and response categories since then. In 2000, for example, the term “Latino” was added to make the question read, “Is this person Spanish/Hispanic/Latino?”

Note: This post was originally published on May 28, 2009, by Jeffrey S. Passel and Paul Taylor, former vice president of Pew Research Center. It has been updated a number of times since then.

Mark Hugo Lopez is director of race and ethnicity research at Pew Research Center.

Jens Manuel Krogstad is a senior writer and editor at Pew Research Center.

Jeffrey S. Passel is a senior demographer at Pew Research Center.


Content Analysis

Anat Zaidman-Zait

Content analysis is a research method that has been used increasingly in social and health research. Content analysis has been used either as a quantitative or a qualitative research method. Over the years, it expanded from being an objective quantitative description of manifest content to a subjective interpretation of text data dealing with theory generation and the exploration of underlying meaning.

Description

Content analysis is a research method that has been used increasingly in social and health research, including quality of life and well-being. Content analysis has been generally defined as a systematic technique for compressing many words of text into fewer content categories based on explicit rules of coding (Berelson, 1952; Krippendorff, 1980; Weber, 1990). Historically, content analysis was defined as “the objective, systematic and quantitative description of the manifest content of communication” (Berelson, 1952, p. 18). Initially, the manifest content was...
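To make the definition above concrete, the sketch below compresses a couple of short texts into a handful of content categories using explicit keyword rules, in the spirit of manifest-content coding. The categories, keyword lists and example sentences are invented for illustration, not drawn from any published coding scheme.

```python
from collections import Counter

# Explicit coding rules: each category is defined by a list of indicator keywords.
CODING_RULES = {
    "wellbeing": ["happy", "satisfied", "quality of life"],
    "health":    ["illness", "pain", "diagnosis"],
    "support":   ["family", "friends", "community"],
}

def code_text(text, rules=CODING_RULES):
    """Assign every category whose keywords appear in the text (manifest content only)."""
    lowered = text.lower()
    return [category for category, keywords in rules.items()
            if any(keyword in lowered for keyword in keywords)]

documents = [
    "I am satisfied with my quality of life despite the diagnosis.",
    "My family and friends are a huge support.",
]
counts = Counter(code for doc in documents for code in code_text(doc))
print(counts)  # Counter({'wellbeing': 1, 'health': 1, 'support': 1})
```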


Berelson, B. (1952). Content analysis in communication research. Glencoe, IL: Free Press.

Burns, N., & Grove, S. K. (2005). The practice of nursing research: Conduct, critique & utilization. St. Louis, MO: Elsevier Saunders.

Elo, S., & Kyngas, H. (2007). The qualitative content analysis process. Journal of Advanced Nursing, 62, 107–115.

Gadermann, A. M., Guhn, M., & Zumbo, B. D. (2011). Investigating the substantive aspect of construct validity for the satisfaction with life scale adapted for children: A focus on cognitive processes. Social Indicators Research, 100, 37–60.

Graneheim, U. H., & Lundman, B. (2004). Qualitative content analysis in nursing research: Concepts, procedures and measures to achieve trustworthiness. Nurse Education Today, 24, 105–112.

Hsieh, H., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15, 1277–1288.

Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage.

Neuendorf, K. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage.

Norris, C. M., & King, K. (2009). A qualitative examination of the factors that influence women’s quality of life as they live with coronary artery disease. Western Journal of Nursing Research, 31, 513–524.

Spurgin, K. M., & Wildemuth, B. M. (2009). Content analysis. In B. Wildemuth (Ed.), Applications of social research methods to questions in information and library science (pp. 297–307). Westport, CT: Libraries Unlimited.

Walsh, T. R., Irwin, D. E., Meier, A., Varni, J. W., & DeWalt, D. A. (2008). The use of focus groups in the development of the PROMIS pediatric item bank. Quality of Life Research, 17, 725–735.

Weber, R. P. (1990). Basic content analysis. Beverly Hills, CA: Sage.

Willig, C. (2008). Introducing qualitative research in psychology. Berkshire, UK: McGraw-Hill.

Zhang, Y., & Wildemuth, B. M. (2009). Qualitative analysis of content. In B. Wildemuth (Ed.), Applications of social research methods to questions in information and library science (pp. 308–319). Westport, CT: Libraries Unlimited.

Zaidman-Zait, A. (2014). Content analysis. In A. C. Michalos (Ed.), Encyclopedia of quality of life and well-being research (pp. 1258–1261). Dordrecht: Springer. https://doi.org/10.1007/978-94-007-0753-5_552
  • Open access
  • Published: 20 April 2024

“I am in favour of organ donation, but I feel you should opt-in”—qualitative analysis of the #options 2020 survey free-text responses from NHS staff toward opt-out organ donation legislation in England

  • Natalie L. Clark 1 ,
  • Dorothy Coe 2 ,
  • Natasha Newell 3 ,
  • Mark N. A. Jones 4 ,
  • Matthew Robb 4 ,
  • David Reaich 1 &
  • Caroline Wroe 2  

BMC Medical Ethics volume 25, Article number: 47 (2024)


In May 2020, England moved to an opt-out organ donation system, meaning adults are presumed to consent to organ donation unless they are in an excluded group or have opted out. This change aims to improve organ donation rates following brain or circulatory death. Healthcare staff in the UK are supportive of organ donation; however, both healthcare staff and the public have raised concerns and ethical issues regarding the change. The #options survey was completed by NHS organisations with the aim of understanding awareness of and support for the change. This paper analyses the free-text responses from the survey.

The #options survey was registered as a National Institute of Health Research (NIHR) portfolio trial [IRAS 275992] on 14 February 2020 and was completed between July and December 2020 across NHS organisations in the North-East and North Cumbria, and North Thames. The survey contained 16 questions, of which three were free-text, covering reasons against the legislation, additional information required and family discussions. The responses to these questions were thematically analysed.

The #options survey received 5789 responses from NHS staff, with 1404 individuals leaving 1657 free-text responses for analysis. The family discussion question elicited the largest number of responses (66%), followed by those against the legislation (19%) and those requiring more information (15%). Analysis revealed six main themes with 22 sub-themes.

Conclusions

The overall #options survey indicated NHS staff are supportive of the legislative change. Analysis of the free-text responses indicates that the views of the NHS staff who are against the change reflect the reasons, misconceptions, and misunderstandings of the public. Additional concerns included the rationale for the change, informed decision making, easy access to information and information regarding organ donation processes. Educational materials and interventions need to be developed for NHS staff to address the concepts of autonomy and consent, organ donation processes, and promote family conversations. Wider public awareness campaigns should continue to promote the positives and refute the negatives thus reducing misconceptions and misunderstandings.

Trial registration

National Institute of Health Research (NIHR) [IRAS 275992].

Peer Review reports

In May 2020, Max and Keira’s Law, also known as the Organ Donation (Deemed Consent) Act, came into effect in England [1, 2]. This means adults in England are now presumed to have agreed to deceased organ donation unless they are within an excluded group, have actively recorded their decision to opt out of organ donation on the organ donor register (ODR), or have nominated an individual to make the decision on their behalf [1, 2]. The rationale for the legislative change is to improve organ donation rates and reduce the shortage of organs available to donate following brain or circulatory death within the UK [2, 3, 4]. This is particularly important considering the growing number of patients awaiting a transplant: almost 7000 patients were waiting in the UK at the end of March 2023 [5]. Wales was the first to make the legislative change, in December 2015, followed by Scotland in March 2021 and lastly Northern Ireland in June 2023 [2]. Following the change in Wales, consent rates increased from 58% in 2015/16 to 77% in 2018/19 [6], suggesting the opt-out system can significantly increase consent, though it may take a few years to fully appreciate the impact [7, 8]. Spain, for example, has had opt-out legislation since 1979, with increases in organ donation seen 10 years later [9].

Research, however, has raised concerns from both the public and healthcare staff regarding the move to an opt-out system. These concerns predominantly relate to a loss of freedom and individual choice [9, 10], as well as an increased perception of state ownership of organs after death [10, 11, 12]. Healthcare staff additionally fear a loss of trust and a damaged relationship with their patients [9, 11]. These concerns are frequently linked to emotional and attitudinal barriers towards organ donation, understanding and acceptance [9]. Four often-referenced barriers are (1) the jinx factor: superstitious beliefs [13, 14, 15]; (2) the ick factor: feelings of disgust related to donating [13, 14, 15]; (3) bodily integrity: the body must remain intact [13, 14, 15]; and (4) medical mistrust: believing doctors will not save the life of someone on the ODR [13, 14, 15]. The latter barrier is mostly reported by the general public in countries with opt-out systems [13, 14, 16], although medical mistrust features as a barrier across all organ donation systems. In addition, it is a barrier that healthcare staff believe will occur in the UK under an opt-out system [9, 16].

Deceased donation from ethnic minority groups is low in the UK, with family consent being a predominant barrier in these groups. Consent rates are 35% for ethnic minority eligible donors compared with 65% for white eligible donors [5]. The reasons for declining commonly relate to being uncertain of the person’s wishes and believing donation was against their religious or cultural beliefs. Healthcare staff, particularly in the intensive care setting, have expressed a lack of confidence in communicating with and supporting ethnic minority groups because of language barriers and religious or cultural beliefs that differ from their own [17]. However, one study has highlighted that, in general, all religious groups are in favour of organ donation, subject to certain rules and processes. Therefore, increasing healthcare staff’s knowledge of differing religious beliefs would improve communication and help to sensitively support families during this difficult time [18, 19]. Individually and combined, the attitudinal barriers, concerns towards an opt-out system, and lack of understanding about ethnic minority groups can have a significant impact within a soft opt-out system, whereby the family are still approached about donation and can veto it if they wish [11, 12, 20].

The #options survey [21] was completed online by healthcare staff from National Health Service (NHS) organisations in the North-East and North Cumbria (NENC) and North Thames. The aim was to gain an understanding of awareness of and support for the change in legislation. The findings of the survey suggested that NHS staff are more aware, supportive, and proactive about organ donation than the general public, including NHS staff from religious and ethnic minority groups. However, a number of staff still expressed direct opposition to the change in legislation due to personal choice, views surrounding autonomy, misconceptions or a lack of information. This paper focuses on the qualitative analysis of free-text responses to three questions included in the #options survey. It aims to explore respondents’ reasons for being against the legislation, what additional information they require to make a decision, and why they had not discussed their organ donation decision with their family. It further explores subset analyses of place of work, ethnicity, and misconceptions. The findings will inform suggestions for future educational and engagement work.

Design, sample and setting

The #options survey was approved as a clinical research study through the integrated research application system (IRAS) and registered as a National Institute of Health Research (NIHR) portfolio trial [IRAS 275992]. The survey was based on a previously used public survey [22] and peer reviewed by NHS Blood and Transplant (NHSBT). The free-text responses used in #options were an addition to the closed questions used in both the #options and the public survey. Due to the COVID-19 pandemic, the start of the survey was delayed by four months, opening for responses between July and December 2020. All NHS organisations in the NENC and North Thames were invited to take part. Those that accepted invitations were supplied with a communication package to distribute to their staff. All respondents voluntarily confirmed their agreement to participate in the survey at the beginning. The COnsolidated criteria for REporting Qualitative research (COREQ) checklist was used to guide analysis and reporting of findings [23], see Supplementary material 1.

Data collection and analysis

The survey contained 16 questions, including a brief description of the change in legislation. The questions covered demographic details (age, sex, ethnicity, religion), place of work, and whether the respondent had contact with or worked in an area offering support to donors and recipients. Three of the questions filtered to a free-text response, see Supplementary material 2. These responses were transferred to Microsoft Excel to be cleaned and thematically analysed by DC. Thematic analysis was chosen to facilitate identification of groups and patterns within large datasets [24]. Each response was read multiple times to promote familiarity and was initially coded. Following coding, the codes were reviewed to allow areas of interest to form and themes and sub-themes to be derived. Additional subsets were identified and analysed to better reflect and contrast views. This included, at the request of NHSBT, the theme of ‘misconceptions’. The themes were reviewed within the team (DC, CW, NK, NC, MJ) and shared with NHSBT. Any disagreements were discussed and agreed within the team.
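The coding and theming in this study were carried out manually in the spreadsheet. Purely as an illustration of how coded free-text responses can then be summarised, the sketch below tallies a few hypothetical coded responses by question, theme and sub-theme; the labels and data are placeholders, not the study’s actual codebook.

```python
import pandas as pd

# Hypothetical coded free-text responses: one row per response, with the
# question it answered and the theme/sub-theme assigned during manual coding.
coded = pd.DataFrame({
    "question":  ["against", "against", "more_info", "family", "family", "family"],
    "theme":     ["loss of autonomy", "consequences", "everything",
                  "priority and relevance", "priority and relevance", "priority and relevance"],
    "sub_theme": ["informed consent", "loss of trust", "process after death",
                  "individual decision", "topic too difficult", "no decision yet"],
})

# Share of free-text responses per question
print(coded["question"].value_counts(normalize=True).round(2))

# Sub-theme counts nested under each theme
print(coded.groupby(["theme", "sub_theme"]).size())
```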

Overall, the #options survey received 5789 responses from NHS staff. The COVID-19 pandemic further limited the ability of NHS organisations in North Thames to participate, resulting in respondents predominantly being from NENC (86%). Of the respondents, 1404 individuals (24%) left 1657 free-text responses for analysis. The family discussion question elicited the largest number of responses, accounting for 66% of responses (n = 1088), followed by against the legislation at 19% (n = 316) and more information needed at 15% (n = 253). The responses to the against-the-legislation question provided the richest data as they contained the most information. Across the three questions, there were six main themes and 22 sub-themes, see Table 1. The large number of free-text responses illustrates the multifaceted nature of individuals’ views, with many quotes overlapping between themes and sub-themes.

Respondent characteristics

In comparison to the whole #options survey respondents, the free-text response group contained proportionally more males (21% vs 27%), fewer females (78% vs 72%), and marginally more 18–24-year-olds (7% vs 8%), respectively. There were 5% more respondents aged 55 and older in the free-text group, while all other age groups were 2–3% lower when compared with the whole group. Additionally, the free-text group was more ethnically diverse than the whole group (6.9% vs 15.4%), with all named religions also having a higher representation (3.9% vs 7.3%), respectively.

Question one: I am against the legislation – Can you help us understand why you are against this legislation?

Of the three questions, this elicited the largest number of responses from males (n = 94, 30%), those aged over 55 years (n = 103, 33%), and ethnic minority responders (n = 79, 25%). Subset analysis of place of employment indicates 27% were from the transplant centre (n = 84), 8% were from the mental health trust (n = 26), and 4% from the ambulance trust (n = 14). Thematic analysis uncovered four main themes and 12 sub-themes from the responses, with the predominant theme being a perceived loss of autonomy.

Theme one: loss of autonomy

Respondents’ reasons for a loss of autonomy were categorised into four sub-themes. The first called into question the nature of informed consent and the second, people’s awareness of the legislative change. One respondent stated individuals need to be “fully aware and informed” [R2943] in order to have consented to organ donation. However, another stated that they believe individuals have “not [been] informed well” [R930], and thus “if people are not aware of it, how are they making a choice on what happens to their organs” [R1166]. It was suggested that awareness of the change may have “been overshadowed by COVID” [R4119].

Furthermore, there were concerns regarding the means to record an opt-out decision, specifically for those who are “not tech savvy” [R167], “homeless” [R5721], “vulnerable” [R4553], or “elderly” [R2155], thereby removing those individuals’ ability to record their decision because they are at a disadvantage.

Finally, respondents expressed concerns about a move to an authoritarian model of State ownership of organs. This elicited strong, negative reactions from individuals who believed the State would own and “harvest” a person’s organs under a deemed consent approach, with some consequently removing themselves as donors: “I am furious that the Government has decided that my organs are theirs to assign. It is MY gift to give, not theirs. I have now removed myself as a long-standing organ donor.” [R593].

Theme two: consequences

After stating their reasons for being against the legislative change, respondents went on to discuss what they believed would be the consequences of an opt-out legislation, with a focus on trust. Respondents cited a lack of trust towards the system, “I have no Trust in the UK government” [R5374], with some surprisingly citing a lack of trust towards healthcare professionals, “Don’t trust doctors in regard to organ donation” [R3010], as well as a fear of eroding trust with the general public, “This brings the NHS Organ Donation directly into dispute with the public.” [R1237]. Respondents additionally believed the legislative change would lead to an increase in mistakes, i.e. organs being removed against a person’s wishes because consent was presumed: “not convinced that errors won't be made in my notifying my objection and that this won't be dealt with or handed over correctly” [R3018]. Finally, it is believed this change would also lead to “additional upset” [R587] for already grieving families.

Theme three: legislation

Respondents were additionally against the legislation itself because they believed it lacked an evidence base demonstrating that it increases the number of organs donated. Respondents also perceived the legislation as removing the donor’s choice over which organs they want to donate, in some cases for religious reasons: “I don't mind donating but would like choice of what I like to i.e., not my cornea as for after life I want to see where I am going.” [R5274].

Theme four: religion and culture

Religion and culture formed another common theme, with sub-themes relating to maintaining bodily integrity following death and a lack of clarity around the definition of brain death. Many others stated that organ donation is against their religion or were “unsure whether organ donation is permissible” [R1067].

Question two: I need more information to decide—What information would you like to help you decide?

This question elicited the most responses from females (n = 188, 74%) and those aged over 55 years (n = 80, 32%), with 19% being from ethnic minority groups (n = 49). Subset analysis of place of employment indicates 18% were from the transplant centre (n = 46), 8% were from the mental health trust (n = 18), and 9% from the ambulance trust (n = 23). Thematic analysis uncovered a main theme of “everything”: many responses did not specify what information was required but indicated that more general information on organ donation was needed. Within this, there were five sub-themes.

Sub-themes:

The first sub-theme identified a request for information around the influence a family has on the decision to donate and the information that will be provided to families. This included providing “emotional wellbeing” [R162] support, and information on whether families can “appeal against the decision” [R539] or “be consulted” [R923] following their loved one’s death. This was mainly requested by those employed by transplant centres.

The second request was for information on the “process involved after death for organ retrieval” [R171], predominantly by ethnic minority groups and those employed by the mental health trusts, with specific requests on confirming eligibility. Other examples of requested information on the process and pathway included “how the organs will be used” [R1086], “what will be donated” [R1629], and “who benefits from them” [R3730].

The third request was information regarding the publicity strategy to raise awareness of the legislative change. Many of the respondents stated they did not think there was enough “coverage in the media” [R3668]. Additional considerations of public dissemination were to ensure it was an “easy read update” [R1373], specifically for “the elderly or those with poor understanding of English who may struggled with the process” [R1676].

The fourth request was information around the systems in place to record a decision. There were additional requests about the opt-out process if someone was within the excluded group and “what safeguards are in place” [R3777], as well as what happens if individuals change their mind and how easily a new decision can be recorded.

Finally, and similarly to the first question, the fifth request was for an evidence base. Respondents stated that they “would like to know the reasons behind this change” [R3965], believing that a greater understanding might increase their support for the legislative change.

Question three: Have you discussed your decision with a family member? If no, can you help us understand what has stopped you discussing this with your family?

The free-text responses to analyse were from those who responded “No” to, “Have you discussed your decision with a family member?”. This received 1430 responses with females ( n  = 1025, 27%) predominantly answering “No”. However, not everyone left a free-text response, leaving 1088 comments for analysis. These were predominantly made by those aged over 55 years ( n  = 268, 24%), with 5% being from ethnic minority groups ( n  = 49). Subset analysis of the 1088 responses regarding place of employment indicated 14% were from the transplant centre ( n  = 147), 7% were from the mental health trust ( n  = 78), and 9% from the ambulance trust ( n  = 96). The analysis uncovered a main theme of priority and relevance made up of five sub-themes.

The first sub-theme identified the reason that it was their “individual decision” [R3] and that there would be “nothing to be gained” [R248] from a discussion. Some respondents stated that despite a lack of discussion, their family members would assume their wishes in relation to organ donation and support these, “I imagine they are all of the same mindset” [R4470]. However, some stated that this was because they “don’t have a family” [R1127] to discuss it with or have “young ones whose understanding is limited about organ donation” [R356]. Positively, there were several respondents who suggested the question had acted as a prompt to speak to their family.

Another reason stated by respondents was that they found the topic to be too difficult to discuss due to “recent bereavements” [R444], “current environment” [R441], and “a reluctance to address death” [R4486]. As evident in the latter quote, many respondents viewed discussions around death and dying as a “taboo subject” [R3285], increasing the avoidance of having such conversations.

Finally, the fifth reason was that several respondents “had not made any decision yet” [R2478]. One respondent wanted to ensure they had reviewed all available information before deciding and then having a well-informed discussion with their family.

Misconceptions

A further subset analysis of responses coded as misconceptions was reviewed at the request of NHSBT, with interest in whether these came from healthcare staff working with donors and recipients. Misconceptions were identified across the three questions, accounting for 24% of the responses to the against-the-legislation question. Responses used emotive, powerful words, with suggestions of State ownership of organs, abuse of the system to procure organs, changes in the treatment of donors to hasten death, religious and cultural objections, and recipient worthiness.

I worked in organ retrieval theatre during my career and I was uncomfortable with the way the operations were performed during this period. Although the 'brain dead' tests had been completed prior to the operation the vital signs of the patient often reflected that the patient was responding to painful stimuli. Sometimes the patient was not given the usual analgesia that is often given during routine operations. This made me rethink organ donation therefore I am uncomfortable with this. I always carried a donation card prior to my experience but subsequently would not wish to donate. This may be a personal feeling but that is my experience. [R660].
I think that this is a choice that should be left to individuals and families to make. After many years in nursing lots of it spent with transplant patients not all recipients embrace a 'healthy lifestyle' post-transplant with many going back to old lifestyle choices which made a transplant necessary in the first place. [R867].

Additional comments suggested certain medical conditions and advancing age preclude donation and that the ability to choose which organs to donate had been removed.

Most of them will be of no use as I have had a heart attack, I smoke and have Type 2 diabetes. [R595]

Further analysis indicated that 27% (n = 24) of these comments were made by individuals who worked with or in an area that supported donors and recipients.

In summary, this qualitative paper has evidenced that the ability to make an autonomous, informed decision is foremost in respondents’ thoughts regarding an opt-out system. This has been commonly cited throughout the literature as a reason for being against an opt-out system [9, 10, 25, 26]. The loss of that ability was the primary reason for respondents being against the change in legislation, with the notion that the decision is a personal choice cited as a reason for the lack of discussion with family members. Respondents stated that the ability to make autonomous decisions needs to be adequately supported by evidence-based information that is accessible to all. If such information is unavailable, they expressed concern about negative consequences. These include a heightened perception of the potential for mistakes and abuse of the system, as well as family distress and loss of trust in the donation system and the staff who work in it, as supported by previous literature [9, 11].

Our findings further coincide with those of previous literature, highlighting views that the opt-out system is a move towards an authoritarian system, illustrates the commercialisation of organs, and is open to abuse and mistakes [10, 11, 12, 27, 28, 29]. Healthcare staff require reassurance that the population, specifically hard-to-reach groups such as the elderly and homeless, have access to information and systems in order to be able to make an informed decision [30, 31]. Whilst the findings from the overall #options survey demonstrated that awareness is higher among NHS staff, there was a significant narrative in the free-text responses regarding a lack of awareness and a concern that the general public must also lack awareness of the system change. Some responses also reflected the medical mistrust concerns of the general public [13, 14, 16], as well as expressing a fear of losing trust with the public [9, 11, 16], as found in previous work. Additional research articles raising awareness of the opt-out system in England suggest that, despite publicising the change with carefully crafted positive messaging, negative views and attitudes are likely to influence interpretation, leading to an increase in misinformation [28]. Targeted, evidence-based interventions and campaigns that address misinformation, particularly in sub-groups such as ethnic minorities, are likely to provide reassurance to NHS staff and the general public, as well as providing reliable sources of information [28].

Respondents also requested more detailed information about the process of organ donation. The gaps in information and knowledge about donation processes include eligibility criteria, perceived religious and cultural exclusions, the practical processes of brain and circulatory death and subsequent organ retrieval, and, most importantly, the care provided to the donor before and after the donation procedure. The gap in available factual knowledge is instead filled by misconceptions and misunderstandings, which are perpetuated until new information and knowledge are acquired. It may also be attributed to the increased awareness of ethical and regulatory processes. These attitudes and views illustrate the complexity of opinions associated with religion, culture, medical mistrust, and ignorance of the donation processes [11, 15, 32]. There is evidently a need for healthcare staff to display openness and transparency about the processes of organ donation and how donation is carried out, particularly with the donor’s family. It further reinforces the need to increase knowledge of differing religious and cultural beliefs to support conversations with families [18, 19].

Both healthcare staff and the public would benefit from educational materials and interventions to address attitudes towards organ donation [19, 28, 33]. This would assist in correcting misconceptions and misunderstandings held by NHS staff, specifically those who support and work with organ donors and recipients. Previous work shows that support for donation is higher among intensivists and recommends educational programmes to increase awareness across all healthcare staff [34]. The quantitative and qualitative findings of the #options survey support this recommendation, adding that interventions need to be delivered by those working within organ donation and transplantation. This would build on the community work being conducted by NHSBT, hopefully leading NHS staff to become transplant ambassadors within their local communities.

A further finding was that of confusion and misunderstanding surrounding the role of the family, a finding also supported by the literature [11]. It was suggested that family distress would be heightened and that families would override the premise of opt-out. The literature also suggests this could be further impacted if the family holds negative attitudes towards organ donation [20]. Uncertainty about the donor’s wishes was the most common reason for refusal among ethnic minority groups [35], further highlighting the need for family discussions. Without these, families feel they are left with no prior indication, so they opt out as a precaution. Making an opt-in decision known can aid the grieving process, as the family takes comfort in knowing they are fulfilling the donor’s wishes [26], and it reduces the likelihood of refusal due to uncertainty about those wishes [36]. The ambiguity around the role of the family, coupled with not explicitly stating a choice via the organ donor register or in discussions with family, can make decisions problematic for next of kin and NHS staff.

Limitations

It is acknowledged that the findings of this study could have been influenced by the COVID-19 pandemic beyond the changes to the research delivery plan, including a shift in critical care priorities, an initial increase in false information circulating on social media, delayed specialist nurse training, and the removal of planned public campaigns [37, 38]. The degree of the impact is unknown and supports the view that ongoing research into healthcare staff attitudes is required. Additionally, the survey did not collect job titles and is therefore limited to combining all healthcare staff responses. It is understood that not all staff, such as those working in mental health, would know in-depth details of organ donation and the legislation, but it is expected that their level of knowledge would be greater than that of the general public.

The quantitative analysis [21] of the #options survey showed that overall NHS staff are well informed and more supportive of the change in legislation when compared with the general public. This qualitative analysis of the free-text responses provides greater insight into the views of the healthcare staff who are against the change. The reasons given reflect the known misconceptions and misunderstandings held by the general public and evidenced within the literature [9, 10, 11, 12, 13, 14, 15, 16]. There are further concerns about the rationale for the change, the nature of informed decision making, and ease of access to information, including information regarding organ donation processes. We therefore propose that educational materials and interventions for NHS staff be developed that address the concepts of autonomy and consent, are transparent about organ donation processes, and address the need for conversations with family. Regarding wider public awareness campaigns, there is a continued need to promote the positives and refute the negatives to fill the knowledge gap with evidence-based information [39] and reduce misconceptions and misunderstandings.

Availability of data and materials

The datasets analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

COVID-19: Coronavirus Disease 2019

IRAS: Integrated research application system

NENC: North-East and North Cumbria

NHS: National Health Service

NHSBT: National Health Service Blood and Transplant

NIHR: National Institute of Health Research

ODR: Organ donor register

UK: United Kingdom

NHS Blood and Transplant. Max and Keira’s Law comes into effect in England. https://www.organdonation.nhs.uk/get-involved/news/max-and-keira-s-law-comes-into-effect-in-england (Accessed 24 Feb 2021).

NHS Blood and Transplant. Organ donation laws. https://www.organdonation.nhs.uk/uk-laws/ (Accessed 26 July 2023).

Human Tissue Authority. Human Tissue Act 2004: human tissue authority. 2017. https://www.hta.gov.uk/policies/human-tissue-act-2004 (Accessed 07 Apr 2023).

NHS Blood and Transplant. ODT clinical, donation after brainstem death. 2019. https://www.odt.nhs.uk/deceased-donation/best-practice-guidance/donation-after-brainstem-death/ (Accessed 07 Apr 2023).

NHS Blood and Transplant. Organ and Tissue Donation and Transplantation: Activity Report 2022/23. https://nhsbtdbe.blob.core.windows.net/umbraco-assets-corp/30198/activity-report-2022-2023-final.pdf (Accessed 26 July 2023).

NHS Blood and Transplant. Welsh Health Minister celebrates that ‘Opt-out organ donation scheme has transformed lives’. 2020. https://www.organdonation.nhs.uk/get-involved/news/welsh-health-minister-celebrates-that-opt-out-organ-donation-scheme-has-transformed-lives/#:~:text=Consent%20rates%20for%20donation%20have,from%20180%20in%202017%2F18 (Accessed 26 July 2023).

Noyes J, McLaughlin L, Morgan K, et al. Short-term impact of introducing a soft opt-out organ donation system in Wales: before and after study. BMJ Open. 2019;9: e025159. https://doi.org/10.1136/bmjopen-2018-025159 .


Madden S, Collett D, Walton P, Empson K, Forsythe J, Ingham A, Morgan K, Murphy P, Neuberger J, Gardiner D. The effect on consent rates for deceased organ donation in Wales after the introduction of an opt-out system. Anaesthesia. 2020;75(9):1146–52. https://doi.org/10.1111/anae.15055 .

Rieu R. The potential impact of an opt-out system for organ donation in the UK. Law, ethics and medicine. 2010;36:534–8. https://doi.org/10.1136/jme.2009.031757 .

Miller J, Currie S, McGregor LM, O’Carroll RE. ‘It’s like being conscripted, one volunteer is better than 10 pressed men’: A qualitative study into the views of people who plan to opt-out of organ donation. 2020; 25: 257–274. https://doi.org/10.1111/bjhp.12406

Miller J, Currie S, O’Carroll RE. ‘If I donate my organs it’s a gift, if you take them it’s theft’: a qualitative study of planned donor decisions under opt-out legislation. BMC Public Health. 2019;19:1463. https://doi.org/10.1186/s12889-019-7774-1 .

Rudge CJ. Organ donation: opting in or opting out? British Journal of General Practice. 2018: 62–63. https://doi.org/10.3399/bjgp18X694445

Morgan SE, Stephenson MT, Harrison TR, Afifi WA, Long SD. Facts versus ‘feelings’: how rational is the decision to become an organ donor? J Health Psychol. 2008;13(5):644–58. https://doi.org/10.1177/1359105308090936 .

Clark NL, Copping L, Swainston K, McGeechan GJ. Attitudes to Organ Donor Registration in England Under Opt-Out Legislation. Progress In Transplantation. 2023; 0(0). doi: https://doi.org/10.1177/15269248231189869

Miller J, Currie S, O’Carroll RE. ‘What if I’m not dead?’ – Myth-busting and organ donation. Br J Health Psychol. 2019;24:141–58. https://doi.org/10.1111/bjhp.12344 .

Organ Donation Taskforce. The potential impact of an opt out system for organ donation in the UK. The National Archives. 2008:1–36.

Morgan M, Kenten C, Deedat S, Farsides B, Newton T, Randhawa G, et al. Increasing the acceptability and rates of organ donation among ethnic groups: a programme of observational and evaluative research on Donation, Transplantation and Ethnicity (DonaTE). Program Grants Appl Res. 2016;4(4):1–196. https://doi.org/10.3310/pgfar04040 .

Doerry K, Oh J, Vincent D, Fischer L, Schulz-Jurgensen S. Religious and cultural aspects of organ donation: Narrowing the gap through understanding different religious beliefs. Pediatr Transplant. 2022;26: e14339. https://doi.org/10.1111/petr.14339 .

Witjes M, Jansen NE, van der Hoeven JG, Abdo WF. Interventions aimed at healthcare professionals to increase the number of organ donors: a systematic review. Crit Care. 2019;23:227. https://doi.org/10.1186/s13054-019-2509-3 .

Shepherd L, O’Carroll RE, Ferguson E. Assessing the factors that influence the donation of a deceased family member’s organs in an opt-out system for organ donation. Soc Sci Med. 2023;317: 115545. https://doi.org/10.1016/j.socscimed.2022.115545 .

Coe D, Newell N, Jones M, Robb M, Clark N, Reaich D, Wroe C. NHS staff awareness, attitudes and actions towards the change in organ donation law in England – results of the #options survey 2020. Archives of Public Health. 2023;81:88. https://doi.org/10.1186/s13690-023-01099-y .

Welsh Government Social Research. Survey of Public Attitudes to Organ Donation: Wave 2. 66/2013, Welsh Government, 2013. https://gov.wales/sites/default/files/statistics-and-research/2019-04/public-attitudes-organ-donation-wave-2.pdf (Accessed 29 June 2021).

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57. https://doi.org/10.1093/intqhc/mzm042 .

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

Cantrell TK. The ‘opt-out’ approach to deceased organ donation in England: A misconceived policy which may precipitate moral harm. Clin Ethics. 2019;14:63–9. https://doi.org/10.1177/1477750919851052 .

Hoeyer K, Jensen AMB, Olejaz M. Transplantation as an abstract good: practising deliberate ignorance in deceased organ donation in Denmark. Sociol Health Illn. 2015;37:578–93. https://doi.org/10.1111/1467-9566.12211 .

Dalal AR. Philosophy of organ donation: Review of ethical facets. World J Transplant. 2015;5:44–51. https://doi.org/10.5500/wjt.v5.i2.44 .

Faherty G, Williams L, Noyes J, McLaughlin L, Bostock J, Mays N. Analysis of content and online public responses to media articles that raise awareness of the opt-out system of consent to organ donation in England. Front Public Health. 2022. https://doi.org/10.3389/fpubh.2022.1067635 .

Koplin JJ. From blood donation to kidney sales: the gift relationship and transplant commercialism. Monash Bioeth Rev. 2015;33:102–22. https://doi.org/10.3389/fpubh.2022.1067635 .

Morgan M, Kenten C, Deedat S, et al. Attitudes to deceased organ donation and registration as a donor among minority ethnic groups in North America and the UK: a synthesis of quantitative and qualitative research. Ethn Health. 2013;18:367–90. https://doi.org/10.1080/13557858.2012.752073 .

Truijens D, van Exel J. Views on deceased organ donation in the Netherlands: A q-methodology study. PLoS ONE. 2019;14: e0216479. https://doi.org/10.1371/journal.pone.0216479 .

Irving MJ, Tong A, Jan S, Cass A, Rose J, Chadban S, Allen RD, Craig JC, Wong G, Howard K. Factors that influence the decision to be an organ donor: a systematic review of the qualitative literature. Nephrol Dial Transplant. 2012;27:2526–33. https://doi.org/10.1093/ndt/gfr683 .

Radunz S, Hertel S, Schmid KW, Heuer M, Stommel P, Fruhauf NR, Saner FH, Paul A, Kaiser GM. Attitude of Health Care Professionals to Organ Donation: Two Surveys Among the Staff of a German University Hospital. Transplant Proc. 2010;42:126–9. https://doi.org/10.1016/j.transproceed.2009.12.034 .

Umana E, Grant O, Curran E, May P, Mohamed A, O’Donnell J. Attitudes and knowledge of healthcare professionals regarding organ donation. A survey of the Saolta University health care group. Ir Med J. 2018;111:838.

Google Scholar  

Daga S, Patel R, Howard D, et al. ‘Pass it on’ - New Organ Donation Law in England May 2020: What Black, Asian or Minority Ethnic (BAME) Communities should do and Why? The Physician; 6. Epub ahead of print 5 May 2020. DOI: https://doi.org/10.38192/1.6.1.7 .

NHS Organ Donation. The UK Opt-Out Experience. https://www.youtube.com/watch?v=22oCq5NKoiE&t=3211s . YouTube 25 February 2021. Accessed 05 Dec 2023.

UK Parliament, House of Lords, Lords Chamber, Volume 803: debated 18 May 2020. Draft Human Tissue (Permitted Material: Exceptions) (England) Regulations 2020. https://hansard.parliament.uk/lords/2020-05-18/debates/1a7747af-1951-4289-b8d5-639493c85bb1/LordsChamber . 18 May 2020. Accessed 05 Dec 2023.

Miller J, McGregor L, Currie S, O’Carroll RE. Investigating the effects of threatening language, message framing, and reactance in opt-out organ donation campaigns. Ann Behav Med. 2022. https://doi.org/10.1093/abm/kaab017 .

Download references

Acknowledgements

With thanks to the NHSBT legislation implementation team for peer review of the questionnaire and the Kantar population survey data.

Funding for the project was provided by the Northern Counties Kidney Research Fund (grant number 16.01).

Author information

Authors and affiliations

South Tees Hospitals NHS Foundation Trust, Middlesbrough, North Yorkshire, England, UK

Natalie L. Clark & David Reaich

Newcastle-Upon-Tyne Hospitals NHS Foundation Trust, Newcastle Upon Tyne, Tyne and Wear, England, UK

Dorothy Coe & Caroline Wroe

Centre for Process Innovation, Sedgefield, County Durham, England, UK

Natasha Newell

NHS Blood and Transplant, Bristol, England, UK

Mark N. A. Jones & Matthew Robb


Contributions

NC, DC, and CW were responsible for the drafting and revising of the manuscript. NN, MJ, MR, DR, and CW were responsible for the design of the study. DC completed the qualitative analysis. NC, DC, NN, MJ, MR, DR, and CW read and approved the final manuscript.

Corresponding author

Correspondence to Caroline Wroe .

Ethics declarations

Ethics approval and consent to participate

The research was carried out in accordance with the Declaration of Helsinki. The study was reviewed and approved by the Health Research Authority (HRA) and Health and Care Research Wales (HCRW) [REC reference: 20/HRA/0150] via the Integrated Research Application System (IRAS) and registered as a National Institute for Health Research (NIHR) portfolio trial [IRAS 275992]. Informed consent was obtained from all the participants and/or their legal guardians.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Clark, N.L., Coe, D., Newell, N. et al. “I am in favour of organ donation, but I feel you should opt-in”—qualitative analysis of the #options 2020 survey free-text responses from NHS staff toward opt-out organ donation legislation in England. BMC Med Ethics 25, 47 (2024). https://doi.org/10.1186/s12910-024-01048-6


Received: 19 September 2023

Accepted: 17 April 2024

Published: 20 April 2024

DOI: https://doi.org/10.1186/s12910-024-01048-6


  • Organ donation
  • Legislation
  • Qualitative




  • Open access
  • Published: 01 April 2024

Adaptive neighborhood rough set model for hybrid data processing: a case study on Parkinson’s disease behavioral analysis

  • Imran Raza 1 ,
  • Muhammad Hasan Jamal 1 ,
  • Rizwan Qureshi 1 ,
  • Abdul Karim Shahid 1 ,
  • Angel Olider Rojas Vistorte 2 , 3 , 4 ,
  • Md Abdus Samad 5 &
  • Imran Ashraf 5  

Scientific Reports volume 14, Article number: 7635 (2024)


  • Computational biology and bioinformatics
  • Machine learning

Extracting knowledge from hybrid data, comprising both categorical and numerical data, poses significant challenges due to the inherent difficulty in preserving information and practical meanings during the conversion process. To address this challenge, hybrid data processing methods, combining complementary rough sets, have emerged as a promising approach for handling uncertainty. However, selecting an appropriate model and effectively utilizing it in data mining requires a thorough qualitative and quantitative comparison of existing hybrid data processing models. This research aims to contribute to the analysis of hybrid data processing models based on neighborhood rough sets by investigating the inherent relationships among these models. We propose a generic neighborhood rough set-based hybrid model specifically designed for processing hybrid data, thereby enhancing the efficacy of the data mining process without resorting to discretization and avoiding information loss or practical meaning degradation in datasets. The proposed scheme dynamically adapts the threshold value for the neighborhood approximation space according to the characteristics of the given datasets, ensuring optimal performance without sacrificing accuracy. To evaluate the effectiveness of the proposed scheme, we develop a testbed tailored for Parkinson’s patients, a domain where hybrid data processing is particularly relevant. The experimental results demonstrate that the proposed scheme consistently outperforms existing schemes in adaptively handling both numerical and categorical data, achieving an impressive accuracy of 95% on the Parkinson’s dataset. Overall, this research contributes to advancing hybrid data processing techniques by providing a robust and adaptive solution that addresses the challenges associated with handling hybrid data, particularly in the context of Parkinson’s disease analysis.


Introduction

The advancement of technology has facilitated the accumulation of vast amounts of data from various sources such as databases, web repositories, and files, necessitating robust tools for analysis and decision-making 1 , 2 . Data mining, employing techniques such as support vector machine (SVM), decision trees, neural networks, clustering, fuzzy logic, and genetic algorithms, plays a pivotal role in extracting information and uncovering hidden patterns within the data 3 , 4 . However, the complexity of the data landscape, characterized by high dimensionality, heterogeneity, and non-traditional structures, renders the data mining process inherently challenging 5 , 6 . To tackle these challenges effectively, a combination of complementary and cooperative intelligent techniques, including SVM, fuzzy logic, probabilistic reasoning, genetic algorithms, and neural networks, has been advocated 7 , 8 .

Hybrid intelligent systems, amalgamating various intelligent techniques, have emerged as a promising approach to enhance the efficacy of data mining. Adaptive neuro-fuzzy inference systems (ANFIS) have laid the groundwork for intelligent systems in data mining techniques, providing a foundation for exploring complex data relationships 7 , 8 . Moreover, the theory of rough sets has found practical application in tasks such as attribute selection, data reduction, decision rule generation, and pattern extraction, contributing to the development of intelligent systems for knowledge discovery 7 , 8 . Extracting meaningful knowledge from hybrid data, which encompasses both categorical and numerical data, presents a significant challenge. Two predominant strategies have emerged to address this challenge 9 , 10 . The first strategy involves employing numerical data processing techniques such as Principal Component Analysis (PCA) 11 , 12 , Neural Networks 13 , 14 , 15 , 16 , and SVM 17 . However, this approach necessitates converting categorical data into numerical equivalents, leading to a loss of contextual meaning 18 , 19 . The second strategy leverages rough set theory alongside methods tailored for categorical data. Nonetheless, applying rough set theory to numerical data requires a discretization process, resulting in information loss 20 , 21 . Numerous hybrid data processing methods have been proposed, combining rough sets and fuzzy sets to handle uncertainty 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 . However, selecting an appropriate rough set model for a given dataset necessitates exploring the inherent relationships among existing models, presenting a challenge for users. The selection and utilization of an appropriate model in data mining thus demand qualitative and quantitative comparisons of existing hybrid data processing models.

This research endeavors to present a comprehensive analysis of hybrid data processing models, with a specific focus on those rooted in neighborhood rough sets (NRS). By investigating the inherent interconnections among these models, this study aims to elucidate their complex dynamics. To address the challenges posed by hybrid data, a novel hybrid model founded on NRS is introduced. This model enhances the efficiency of the data mining process without discretization, mitigating information loss and ambiguity in data interpretation. Notably, the adaptability of the proposed model, particularly in adjusting the threshold value governing the neighborhood approximation space, ensures optimal performance aligned with dataset characteristics while maintaining high accuracy. A dedicated testbed tailored for Parkinson’s patients is developed to evaluate the real-world effectiveness of the proposed approach. Furthermore, a rigorous evaluation of the proposed model is conducted, encompassing both accuracy and overall effectiveness. Encouragingly, the results demonstrate that the proposed scheme surpasses alternative approaches, adeptly managing both numerical and categorical data through an adaptive framework.

The major contributions, listed below, collectively emphasize the innovative hybrid data processing model, the adaptive nature of its thresholding mechanism, and the empirical validation using a Parkinson’s patient testbed, underscoring the relevance and significance of the study’s findings.

Novel Hybrid Data Processing Model: This research introduces a novel hybrid data processing model based on NRS, preserving the practical meaning of both numerical and categorical data types. Unlike conventional methods, it minimizes information loss while optimizing interpretability. The proposed distance function combines Euclidean and Levenshtein distances with weighted calculations and dynamic selection mechanisms to enhance accuracy and realism in neighborhood approximation spaces.

Adaptive Thresholding Mechanism: Another key contribution is the integration of an adaptive thresholding mechanism within the hybrid model. This feature dynamically adjusts the threshold value based on dataset characteristics, ensuring optimal performance and yielding more accurate and contextually relevant results.

Empirical Validation through Parkinson’s Testbed: This research provides a dedicated testbed for analyzing behavioral data from Parkinson’s patients, allowing rigorous evaluation of the proposed hybrid data processing model. Utilizing real-world datasets enhances the model’s practical applicability and advances knowledge in medical data analysis and diagnosis.

The remainder of the paper is organized as follows: section “Related work” reviews the related work, the proposed model is introduced in section “Adaptive neighborhood rough set model”, section “Instrumentation” describes the experimental testbed and instrumentation, section “Result and discussion” presents the results and discussion, and section “Conclusion and future work” concludes the paper. A list of notations used in this study is provided in Table 1.

Related work

Rough set-based approaches have been utilized in various applications like bankruptcy prediction 42, attribute/feature subset selection 43, 44, cancer prediction 45, 46, etc. In addition, recently, several innovative hybrid models have emerged, blending the realms of fuzzy logic and neighborhood rough sets (NRSs). One such development is presented by Yin et al. 47, who introduce a parameterized hybrid fuzzy similarity relation. They apply this relation to the task of granulating multilabel data, subsequently extending it to the domain of multilabel learning. To construct a noise-tolerant multilabel fuzzy NRS model (NT-MLFNRS), they leverage the inclusion relationship between fuzzy neighborhood granules and fuzzy decisions. Building upon NT-MLFNRS, Yin et al. also devise a noise-resistant heuristic multilabel feature selection (NRFSFN) algorithm. To further enhance the efficiency of feature selection and address the complexities associated with handling large-scale multilabel datasets, they culminate their efforts by introducing an efficient extended version of NRFSFN known as ENFSFN.

Sang et al. 48 explore incremental feature selection methodologies, introducing a novel conditional entropy metric designed to be robust for dynamic ordered data. Their approach introduces the concept of a fuzzy dominance neighborhood rough set (FDNRS) and defines a robust conditional entropy metric based on the FDNRS model. This metric serves as an evaluation criterion for features and is integrated into a heuristic feature selection algorithm; the resulting incremental feature selection algorithm is built upon this model.

Wang et al. 19 introduced the Fuzzy Rough Iterative Computational (FRIC) model, addressing challenges in hybrid information systems (HIS). Their framework includes a specialized distance function for object sets, enhancing object differentiation precision within HIS. Utilizing this function, they establish fuzzy symmetric relations among objects to formulate fuzzy rough approximations. Additionally, they introduce evaluation functions like fuzzy positive regions, dependency functions, and attribute importance functions to assess classification capabilities of attribute sets. They developed an attribute reduction algorithm tailored for hybrid data based on FRIC principles. This work contributes significantly to HIS analysis, providing a robust framework for data classification and feature selection in complex hybrid information systems.

Xu et al. 49 introduced a novel Fitting Fuzzy Rough Set (FRS) model enriched with relative dependency complement mutual information. This model addresses challenges related to data distribution and precision enhancement of fuzzy information granules. They utilized relative distance to mitigate the influence of data distribution on fuzzy similarity relationships and introduced a fitting fuzzy neighborhood radius optimized for enhancing the precision of fuzzy information granules. Within this model, the authors conducted a comprehensive analysis of information uncertainty, introducing definitions of relative complement information entropy and formulating a multiview uncertainty measure based on relative dependency complement mutual information. This work significantly advances our understanding of managing information uncertainty within FRS models, making a valuable contribution to computational modeling and data analysis.

Jiang et al. 50 presented an innovative approach for multiattribute decision-making (MADM) rooted in PROMETHEE II methodologies. Building upon the NRS model, they introduce two additional variants of covering-based variable precision fuzzy rough sets (CVPFRSs) by applying fuzzy logical operators, specifically type-I CVPFRSs and type-II CVPFRSs. In the context of MADM, their method entails the selection of medicines using an algorithm that leverages the identified features.

Qu et al. 51 introduced the concept of Adaptive Neighborhood Rough Sets (ANRSs), aiming for effective integration of feature separation and linkage with classification. They utilize the mRMR-based Feature Selection Algorithm (FSRMI), demonstrating outstanding performance across various selected datasets. However, it’s worth noting that FSRMI may not consistently outperform other algorithms on all datasets.

Xu et al. 52 introduced the Fuzzy Neighborhood Joint Entropy Model (FNSIJE) for feature selection, leveraging fuzzy neighborhood self-information measures and joint entropy to capture combined feature information. FNSIJE comprehensively analyzes the neighborhood decision system, considering noise, uncertainty, and ambiguity. To improve classification performance, the authors devised a new forward search method. Experimental results demonstrated the effectiveness of FNSIJE-KS, efficiently selecting fewer features for both low-dimensional UCI datasets and high-dimensional gene datasets while maintaining optimal classification performance. This approach advances feature selection techniques in machine learning and data analysis.

In 53, the authors introduced a novel multi-label feature selection method utilizing fuzzy NRS to optimize classification performance in multi-label fuzzy neighborhood decision systems. By combining the NRS and FRS models, a multi-label fuzzy NRS model is introduced. They devised a fuzzy neighborhood approximation accuracy metric and crafted a hybrid metric integrating fuzzy neighborhood approximation accuracy with fuzzy neighborhood conditional entropy for attribute importance evaluation. Rigorous evaluation of their methods across ten diverse multi-label datasets showcased significant progress in multi-label feature selection techniques, promising enhanced classification performance in complex multi-label scenarios.

Sang et al. 54 introduced a fuzzy dominance neighborhood rough set model for interval-valued ordered decision systems (IvODS), along with a robust conditional entropy measure to assess monotonic consistency within an IvODS. They also presented two incremental feature selection algorithms. Experimental results on nine publicly available datasets showcased the robustness of their proposed metric and the effectiveness and efficiency of the incremental algorithms, particularly in dynamic IvODS updates. This research significantly advances the application of fuzzy dominance NRS models in IvODS scenarios, providing valuable insights for data analysis and decision-making processes.

Zheng et al. 55 generalized the FRSs using axiomatic and constructive approaches. In the constructive approach, a pair of dual generalized fuzzy approximation operators is defined using an arbitrary fuzzy relation, and different classes of FRSs are characterized using different sets of axioms. The postulates governing fuzzy approximation operators ensure the presence of specific categories of fuzzy relations yielding identical operators. Using a generalized FRS model, Hu et al. 18 introduced an efficient algorithm for hybrid attribute reduction based on fuzzy relations, constructing a forward greedy algorithm that achieves optimal classification performance with fewer selected features and higher accuracy. Considering the similarity between two objects, Wang et al. 36 redefine the fuzzy upper and lower approximations and extend the existing concepts of knowledge reduction to the fuzzy environment, resulting in a heuristic algorithm for learning fuzzy rules.

Gogoi et al. 56 use rough set theory for generating decision rules from inconsistent data. Their scheme uses the indiscernibility relation to find inconsistencies in the data, generating minimized and non-redundant rules using lower and upper approximations. The scheme is based on the LEM2 algorithm 57, which performs local covering to generate minimal, non-redundant sets of classification rules and does not consider global covering. It is evaluated on a variety of datasets from the UCI Machine Learning Repository, all either categorical or numerical with varying feature spaces. The scheme performs consistently better on categorical datasets containing at least one inconsistency, as it is designed to handle inconsistent data. Results show that it generates minimized rules without reducing the feature space, unlike other schemes that compromise the feature space.

In 58 , the authors introduced a novel NRS model to address attribute reduction in noisy systems with heterogeneous attributes. This model extends traditional NRS by incorporating tolerance neighborhood relation and probabilistic theory, resulting in more comprehensive information granules. It evaluates the significance of heterogeneous attributes by considering neighborhood dependency and aims to maximize classification consistency within selected feature spaces. The feature space reduction algorithm employs an incremental approach, adding features while preserving maximal dependency in each round and halting when a new feature no longer increases dependency. This approach selects fewer features than other methods while achieving significantly improved classification performance, demonstrating its effectiveness in attribute reduction for noisy systems.

Zhu et al. 59 propose a fault diagnosis scheme combining a kernel method, NRS, and statistical features to adaptively select sensitive features. They employ a Gaussian kernel function with NRS to map fault data to a high-dimensional space. Their feature selection algorithm uses the hyper-sphere radius in the high-dimensional feature space as the neighborhood value, selecting features based on a significance measure independently of the classification algorithm. A wrapper then deploys a classification algorithm to evaluate the selected features, choosing a subset for optimal classification. Experimental results demonstrate precise determination of the neighborhood value by mapping data into a high-dimensional space using the kernel function and hyper-sphere radius. The methodology proficiently selects sensitive fault features, diagnoses fault types, and identifies fault degrees in rolling bearing datasets.

A neighborhood covering rough set model for fuzzy decision systems has been proposed to handle hybrid decision systems having both fuzzy and numerical attributes 60. The fuzzy neighborhood relation measures indiscernibility and approximates the universe using information granules, which deal with fuzzy attributes directly. The experimental results evaluate the influence of the neighborhood operator size on the accuracy and attribute reduction of fuzzy neighborhood rough sets: attribute reduction increases as the threshold grows, but once the neighborhood operator exceeds a certain value a feature no longer distinguishes any samples and cannot reduce attributes.

Hou et al. 61 applied NRS reduction techniques to cancer molecular classification, focusing on gene expression profiles. Their method introduced a novel perspective by using gene occurrence probability in selected gene subsets to indicate tumor classification efficacy. Unlike traditional methods, it integrated both Filters and Wrappers, enhancing classification performance while being computationally efficient. Additionally, they developed an ensemble classifier to improve accuracy and stability without overfitting. Experimental results showed the method achieved high prediction accuracy, identified potential cancer biomarkers, and demonstrated stability in performance.

Table 2 gives a comparison of existing rough set-based schemes for quantitative and qualitative analysis. The comparative parameters include handling hybrid data, generalized NRS, attribute reduction, classification, and accuracy rate. Most of the existing schemes cannot handle hybrid datasets without discretization, resulting in information loss and a loss of practical meaning. Another parameter for evaluating the effectiveness of an existing scheme is its ability to adapt the threshold value to the given dataset. Most schemes do not adapt the threshold value for the neighborhood approximation space, resulting in variable accuracy rates across datasets. The end user has to adjust the threshold for each dataset without understanding its impact in terms of overfitting, and selecting a large threshold value yields more global rules and poorer accuracy. A mechanism is therefore needed to adaptively choose the threshold, considering both global and local information, without compromising the accuracy rate. The schemes are also evaluated for their ability to perform attribute reduction using NRS, which can greatly improve processing time and accuracy by discarding insignificant attributes. The comparative analysis shows that most of the existing NRS-based schemes perform better than many other well-known schemes in terms of accuracy; most have a higher accuracy rate than CART, C4.5, and kNN, which makes NRS-based schemes a strong choice for attribute reduction and classification.

Adaptive neighborhood rough set model

The detailed analysis of existing techniques highlights the need for a generalized NRS-based classification technique to handle both categorical and numerical data. The proposed NRS-based technique not only handles hybrid information granules but also dynamically selects the threshold \(\delta \), producing optimal results with a high accuracy rate. The proposed scheme considers a hybrid tuple \(HIS=\langle U_h,\ Q_h,\ V,\ f \rangle \), where \(U_h\) is a nonempty set of hybrid records \(\{x_{h1},\ x_{h2},\ x_{h3},\ \ldots ,\ x_{hn}\}\) and \(Q_h=\left\{ q_{h1},\ q_{h2},\ q_{h3},\ \ldots ,\ q_{hn}\right\} \) is a nonempty set of hybrid features. \(V_{q_h}\) is the domain of attribute \(q_h\) with \(V=\cup _{q_h\in Q_h}V_{q_h}\), and \(f:U_h\times Q_h\rightarrow V\) is a total function, called the information function, such that \(f\left( x_h,q_h\right) \in V_{q_h}\) for each \(q_h\in Q_h,\ x_h\in U_h\). \(\langle U_h,\ Q_h,\ V,\ f\rangle \) is also known as a decision table if \(Q_h=C_h\cup D\), where \(C_h\) is the set of hybrid condition attributes and D is the decision attribute.

A neighborhood relation N is calculated using this set of hybrid samples \(U_h\), creating the neighborhood approximation space \(\langle U_h,\ N\rangle \), which contains the information granules \(\left\{ \delta ({x_h}_i)\mid {x_h}_i\in U_h\right\} \) based on some distance function \(\Delta \). For an arbitrary sample \({x_h}_i\in U_h\) and \(B \subseteq C_h\), the neighborhood \(\delta _B({x_h}_i)\) of \({x_h}_i\) in the subspace B is defined as \(\delta _B\left( {x_h}_i\right) =\{{x_h}_j \mid {x_h}_j \in U_h,\ \Delta _B({x_h}_i,{x_h}_j) \le \delta \}\). The scheme proposes a new hybrid distance function to handle both the categorical and numerical features in an approximation space.

The proposed distance function uses Euclidean distance for numerical features and Levenshtein distance for categorical features. It also accounts for significant features by computing a weighted distance over both categorical and numerical features, and the appropriate distance measure is selected dynamically at run time. Using Levenshtein distance for categorical features yields a more precise distance and hence a better neighborhood approximation space. Existing techniques, when computing the distance for categorical data, simply add 1 when two strings do not match and 0 otherwise, which may not produce a realistic neighborhood approximation space.
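A minimal sketch of such a hybrid distance is given below. The per-feature weights, the normalisation of the Levenshtein term, and the type check used to pick the distance measure are illustrative assumptions; the paper does not spell out these details.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]


def hybrid_distance(x, y, weights=None):
    """Weighted distance over one mixed record: numeric features contribute a
    squared Euclidean term, string features a normalised Levenshtein term.
    The per-feature `weights` are an assumed mechanism for feature significance."""
    if weights is None:
        weights = [1.0] * len(x)
    total = 0.0
    for xi, yi, w in zip(x, y, weights):
        if isinstance(xi, str) or isinstance(yi, str):
            denom = max(len(str(xi)), len(str(yi)), 1)
            total += w * (levenshtein(str(xi), str(yi)) / denom) ** 2
        else:
            total += w * (float(xi) - float(yi)) ** 2
    return total ** 0.5
```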

The neighborhood size depends on the threshold \(\delta \). A larger \(\delta \) produces a neighborhood containing more samples, which yields more rules that ignore the local information in the data, and the accuracy rate of the NRS therefore depends strongly on the selection of the threshold value. The proposed scheme dynamically calculates the threshold for any given dataset considering both local and global information: the threshold is computed from \({min}_D\), the minimum distance between the set of training samples and the test sample (the local information), and \(R_D\), the range of distances between the training samples and the test sample (the global information).
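A plausible form of the threshold, consistent with the later observations that a small radius weight (0.002) works best and that a zero weight uses only local information, is \(\delta ={min}_D + r\,R_D\); the sketch below encodes this assumption and should not be read as the paper's exact formula.

```python
def adaptive_threshold(distances, r=0.002):
    """Assumed form of the adaptive threshold: the minimum distance from the
    training samples to the test sample (local information) plus a fraction r
    of the distance range (global information). r = 0.002 is the radius value
    reported to perform best; the exact combination is an assumption."""
    min_d = min(distances)            # local information
    range_d = max(distances) - min_d  # global information
    return min_d + r * range_d
```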

The proposed scheme then calculates the lower and upper approximations. Given a neighborhood space \(\langle U_h,\ N\rangle \) and \(X \subseteq U_h\), the lower and upper approximations of X are defined as \(\underline{N}X=\left\{ {x_h}_i \mid \delta \left( {x_h}_i\right) \subseteq X,\ {x_h}_i\in U_h\right\} \) and \(\overline{N}X=\left\{ {x_h}_i \mid \delta \left( {x_h}_i\right) \cap X\ne \emptyset ,\ {x_h}_i\in U_h\right\} \).

Given a hybrid neighborhood decision table \(HNDT=\langle U_h,\ C_h\cup D,\ V,\ f\rangle \), where \(\{ X_{h1},X_{h2},\ \ldots ,\ X_{hN} \}\) are the hybrid sample subsets with decisions 1 to N and \(\delta _B\left( x_{hi}\right) \) is the information granule generated by attributes \(B \subseteq C_h\), the lower and upper approximations of the decision D are defined as \(\underline{N}_B D=\bigcup _{i=1}^{N}\underline{N}_B X_{hi}\) and \(\overline{N}_B D=\bigcup _{i=1}^{N}\overline{N}_B X_{hi}\), and the boundary region of D is defined as \(BN(D)=\overline{N}_B D-\underline{N}_B D\).

The lower and upper approximation spaces form the set of rules used to classify a test sample. A test sample forms its neighborhood from the lower approximation, taking all the rules whose distance is less than the dynamically calculated threshold value, and majority voting within this neighborhood decides the class of the test sample. K-fold cross-validation with k = 10 is used to measure the accuracy of the proposed scheme. Algorithm 1 of the proposed scheme has a time complexity of \(O(nm^{2})\), where n is the number of clients and m is the size of the categorical data.
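The classification step can be sketched as follows, reusing the hypothetical hybrid_distance and adaptive_threshold helpers from the sketches above; the fallback to the single nearest sample when the neighborhood is empty is an added assumption.

```python
from collections import Counter

def classify(test_sample, train_samples, train_labels, r=0.002, weights=None):
    """Classify one test sample: measure its distance to every training sample,
    keep the neighbours within the adaptively calculated threshold, and take a
    majority vote over their decision labels."""
    distances = [hybrid_distance(test_sample, s, weights) for s in train_samples]
    delta = adaptive_threshold(distances, r)
    neighbourhood = [lbl for d, lbl in zip(distances, train_labels) if d <= delta]
    if not neighbourhood:  # assumed fallback: use the closest training sample
        neighbourhood = [train_labels[distances.index(min(distances))]]
    return Counter(neighbourhood).most_common(1)[0][0]
```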


Instrumentation

The proposed generalized rough set model has been rigorously assessed through the development of a testbed designed for the classification of Parkinson’s patients. It has also been subjected to testing using various standard datasets sourced from the University of California at Irvine machine learning data repository 63 . This research underscores the increasing significance of biomedical engineering in healthcare, particularly in light of the growing prevalence of Parkinson’s disease, which ranks as the second most common neurodegenerative condition, impacting over 1% of the population aged 65 and above 64 . The disease manifests through distinct motor symptoms like resting tremors, bradykinesia (slowness of movement), rigidity, and poor balance, with medication-related side effects such as wearing off and dyskinesias 65 .

In this study, to address the need for a reliable quantitative method for assessing motor complications in Parkinson’s patients, the data collection process involves utilizing a home-monitoring system equipped with wireless wearable sensors. These sensors were specifically deployed to closely monitor Parkinson’s patients with severe tremors in real time. It’s important to note that all patients involved in the study were clinically diagnosed with Parkinson’s disease. Additionally, before data collection, proper consent was obtained from each participant, and the study protocol was approved by the ethical committee of our university. The data collected from these sensors is then analyzed, yielding reliable quantitative information that can significantly aid clinical decision-making within both routine patient care and clinical trials of innovative treatments.

Figure 1: Testbed for Parkinson’s patients.

Figure 1 illustrates a real-time testbed designed for monitoring Parkinson’s patients. The system utilizes tri-axial accelerometers, each capturing three signals, one per axis (x, y, and z), resulting in a total of 18 channels of data. The sensors in this setup use the ZigBee (IEEE 802.15.4) protocol to transmit data to a computer at a sampling rate of 62.5 Hz, and a transition protocol is applied to keep the transmitted signals synchronized. The data packets are received through the Serial Forwarder using the TinyOS platform ( http://www.tinyos.net ). The recorded acceleration data is represented as digital signals and can be visualized on an oscilloscope. Frequency-domain data is obtained by applying the Fast Fourier Transform (FFT) to the signal and is written to an ARFF file that is then used for classification. The experimental flowchart is shown in Fig. 2.

Figure 2: Experimental flowchart.
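As an illustration of the frequency-domain step described above, the sketch below converts one window of raw acceleration samples into a handful of spectral features with NumPy. The 62.5 Hz sampling rate comes from the testbed description, while the particular features (dominant frequencies and their magnitudes) and the DC-offset removal are assumptions, since the paper only states that the FFT output is written to an ARFF file.

```python
import numpy as np

FS = 62.5  # sampling rate of the wearable sensors in Hz (from the testbed description)

def frequency_features(window, n_peaks=3):
    """Turn one window of raw acceleration samples (a single channel) into a few
    frequency-domain features: the strongest spectral peaks and their magnitudes."""
    window = np.asarray(window, dtype=float)
    window = window - window.mean()              # remove the DC (gravity) offset
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    top = np.argsort(spectrum)[-n_peaks:][::-1]  # indices of the strongest bins
    return list(zip(freqs[top], spectrum[top]))
```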

The real-time testbed includes various components to capture data using the Unified Parkinson’s Disease Rating Scale (UPDRS). TelosB MTM-CM5000-MSP and MTM-CM3000-MSP sensors are used to send and receive radio signals from the sensor to the PC. These sensors are based on an open-source TelosB/Tmote Sky platform, designed and developed by the University of California, Berkeley.

The TelosB sensor uses the IEEE 802.15.4 wireless standard, and its embedded sensors can measure temperature, relative humidity, and light. In the CM3000, the USB connector is replaced with an ERNI connector that is compatible with interface modules. The Hirose 51-pin connector also makes it more versatile, as it can be attached to any sensor board family, and the coverage area is increased through an SMA design with a 5 dBi external antenna 66. These components can be used for a variety of applications such as low-power Wireless Sensor Network (WSN) platforms, network monitoring, and environment monitoring systems.

The MTS-EX1000 sensor board is used to amplify the voltage/current values from the accelerometer. The EX1000 is an attachable board that supports the CMXXXX series of wireless sensor network motes (Hirose 51-pin connector). Its basic function is to connect external sensors to the CMXX00 communication modules, enhancing the mote’s I/O capability and supporting different kinds of sensors depending on the sensor type and its output signal. The ADXL-345 tri-axial accelerometer is used to measure body motion along the x, y, and z axes relative to gravity. It is a small, thin, low-power, 3-axis accelerometer that provides high-resolution (13-bit) measurements at up to ±16 g. Its digital output, in 16-bit two’s-complement format, is accessible through either an SPI (3- or 4-wire) or I2C digital interface. A customized main circuit board with a programmed IC, registers, and transistors is used; its basic function is to convert the digital data read from the ADXL-345 sensor into analog form and send it to MTS1000.
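To make the output format above concrete, the helper below decodes one raw 16-bit two’s-complement ADXL-345 reading into an acceleration in g; the 3.9 mg/LSB scale factor is the sensor’s nominal full-resolution sensitivity and is assumed here rather than taken from the paper.

```python
SCALE_MG_PER_LSB = 3.9  # nominal ADXL-345 full-resolution sensitivity (assumption)

def decode_sample(raw: int) -> float:
    """Decode a raw 16-bit two's-complement ADXL-345 reading into acceleration in g."""
    if raw & 0x8000:       # sign bit set, so the value is negative
        raw -= 1 << 16
    return raw * SCALE_MG_PER_LSB / 1000.0
```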

Result and discussion

The proposed generalized adaptive NRS (ANRS) is evaluated against different datasets taken from the machine learning data repository at the University of California at Irvine. In addition to these common datasets, the real-time testbed for Parkinson’s patients is also used to evaluate the proposed scheme. Hybrid data from 500 people were collected using the testbed, including 10 Parkinson’s patients, 20 people with abnormal and uncontrolled hand movements, and the remaining samples taken from people approximating the hand movements of Parkinson’s patients. The objective of this evaluation is to compare the accuracy rate of the proposed scheme with CART, kNN, and SVM on both simple and complex datasets containing numerical and hybrid features, respectively. The results also demonstrate the selection of the radius r used for dynamically calculating the threshold value.

Table 3 provides the details of the datasets used for the evaluation of the proposed scheme, including the training-to-test ratio used for evaluation in addition to the data type, total number of instances, total number of features, number of features considered for evaluation, and number of classes. Hybrid datasets are also included to evaluate the performance of the proposed scheme on a hybrid feature space without discretization, preventing information loss.

The accuracy of the NRS depends greatly on the threshold value. Most of the existing techniques do not dynamically adapt the threshold \(\delta \) for different hybrid datasets, resulting in variants of NRS that suit specific datasets only at particular threshold values. A specific threshold value may produce better results for one dataset and poor results for others, requiring a more generic threshold mechanism that caters to different datasets with optimal results. The proposed scheme introduces an adaptive threshold calculation mechanism to achieve optimal results regardless of the dataset under evaluation. The radius value plays a pivotal role in forming a neighborhood, as the threshold considers both the local and global information of the NRS when calculating the neighborhood approximation space. Table 4 shows the accuracy rate for different values of the radius of the NRS. The proposed threshold mechanism provides better results for all datasets when the radius is 0.002. Results also show that assigning no weight to the radius produces poor results, as the approximation space then considers only local information, and selecting other radius weights may produce better results for one dataset but not for all.

Table 5 presents the comparative analysis of the proposed scheme with kNN, Naive Bayes, and C4.5. The results show that the proposed scheme performs well against these well-known techniques for both categorical and numerical feature spaces. Naive Bayes and C4.5 also incur information loss, as these techniques cannot process hybrid data directly, whereas the proposed scheme handles hybrid data without compromising information completeness and produces acceptable results. K-fold cross-validation is used to measure the accuracy of the proposed scheme: each dataset is divided into 10 subsets, each subset is used once as the test set while the remaining K-1 subsets form the training set, and the average accuracy over all K trials is computed, so the results do not depend on a particular division of the dataset.
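The evaluation protocol can be sketched as below; the classifier is passed in as a callable (for instance the neighbourhood classifier sketched earlier), and the shuffling and fold assignment are implementation assumptions, since the paper only specifies 10 folds and averaged accuracy.

```python
import random

def k_fold_accuracy(samples, labels, predict, k=10, seed=0):
    """Estimate accuracy by k-fold cross-validation: each fold is held out once
    as the test set while the remaining folds form the training set.
    `predict(test_sample, train_samples, train_labels)` can be any classifier."""
    indices = list(range(len(samples)))
    random.Random(seed).shuffle(indices)
    folds = [indices[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        if not fold:              # skip empty folds when there are fewer samples than k
            continue
        held_out = set(fold)
        train = [i for i in indices if i not in held_out]
        correct = sum(
            predict(samples[i],
                    [samples[j] for j in train],
                    [labels[j] for j in train]) == labels[i]
            for i in fold
        )
        scores.append(correct / len(fold))
    return sum(scores) / len(scores)
```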

Conclusion and future work

This work evaluates existing NRS-based schemes for handling hybrid datasets, i.e., those containing both numerical and categorical features. The comparative analysis of existing NRS-based schemes shows that there is a need for a generic NRS-based approach that adapts the threshold selection when forming the neighborhood approximation space. A generalized, adaptive NRS-based scheme is proposed to handle both categorical and numerical features, avoiding information loss and loss of practical meaning. The proposed scheme uses Euclidean and Levenshtein distances to calculate the upper and lower approximations of the NRS for numerical and categorical features, respectively; both distances have been modified to handle the impact of outliers when calculating the approximation spaces. The proposed scheme defines an adaptive threshold mechanism for calculating the neighborhood approximation space regardless of the dataset under consideration. A testbed was developed for real-time behavioral analysis of Parkinson’s patients to evaluate the effectiveness of the proposed scheme. The evaluation results show that the proposed scheme provides better accuracy than kNN, C4.5, and Naive Bayes for both categorical and numerical feature spaces, achieving 95% accuracy on the Parkinson’s dataset. In future work, the proposed scheme will be evaluated against hybrid datasets having more than two classes. Additionally, we aim to explore the following areas: (i) conduct longitudinal studies to track the progression of Parkinson’s disease over time, allowing for a deeper understanding of how behavioral patterns evolve and how interventions may impact disease trajectory; (ii) explore the integration of additional data sources, such as genetic data, imaging studies, and environmental factors, to provide a more comprehensive understanding of Parkinson’s disease etiology and progression; (iii) validate our findings in larger and more diverse patient populations and investigate the feasibility of implementing our proposed approach in clinical settings to support healthcare providers in decision-making processes; (iv) investigate novel biomarkers or physiological signals that may provide additional insights into Parkinson’s disease progression and motor complications, potentially leading to the development of new diagnostic and monitoring tools; and (v) conduct patient-centered outcomes research to better understand the impact of Parkinson’s disease on patients’ quality of life, functional abilities, and overall well-being, with a focus on developing personalized treatment approaches.

Data availability

The datasets used in this study are publicly available at the following links:

Bupa 67: https://doi.org/10.24432/C54G67, Sonar 68: https://doi.org/10.24432/C5T01Q, Mammographic Mass 69: https://doi.org/10.24432/C53K6Z, Haberman’s Survival 70: https://doi.org/10.24432/C5XK51, Credit-g 71: https://doi.org/10.24432/C5NC77, Lymphography 73: https://doi.org/10.24432/C54598, Splice 74: https://doi.org/10.24432/C5M888, Optdigits 75: https://doi.org/10.24432/C50P49, Pendigits 76: https://doi.org/10.1137/1.9781611972825.9, Pageblocks 77: https://doi.org/10.24432/C5J590, Statlog 78: https://doi.org/10.24432/C55887, Magic04 79: https://doi.org/10.1609/aaai.v29i1.9277.

Gaber, M. M. Scientific Data Mining and Knowledge Discovery Vol. 1 (Springer, 2009).


Hajirahimi, Z. & Khashei, M. Weighting approaches in data mining and knowledge discovery: A review. Neural Process. Lett. 55 , 10393–10438 (2023).


Kantardzic, M. Data Mining: Concepts, Models, Methods, and Algorithms (Wiley, 2011).


Shu, X. & Ye, Y. Knowledge discovery: Methods from data mining and machine learning. Soc. Sci. Res. 110 , 102817 (2023).


Tan, P.-N., Steinbach, M. & Kumar, V. Introduction to Data Mining (Pearson Education India, 2016).

Khan, S. & Shaheen, M. From data mining to wisdom mining. J. Inf. Sci. 49 , 952–975 (2023).

Engelbrecht, A. P. Computational Intelligence: An Introduction (Wiley, 2007).

Bhateja, V., Yang, X.-S., Lin, J.C.-W. & Das, R. Evolution in computational intelligence. In Evolution (Springer, 2023).

Wei, W., Liang, J. & Qian, Y. A comparative study of rough sets for hybrid data. Inf. Sci. 190 , 1–16 (2012).


Kumari, N. & Acharjya, D. Data classification using rough set and bioinspired computing in healthcare applications—An extensive review. Multimedia Tools Appl. 82 , 13479–13505 (2023).

Martinez, A. M. & Kak, A. C. PCA versus LDA. IEEE Trans. Pattern Anal. Mach. Intell. 23 , 228–233 (2001).

Brereton, R. G. Principal components analysis with several objects and variables. J. Chemom. 37 (4), e3408 (2023).


De, R. K., Basak, J. & Pal, S. K. Neuro-fuzzy feature evaluation with theoretical analysis. Neural Netw. 12 , 1429–1455 (1999).

Talpur, N. et al. Deep neuro-fuzzy system application trends, challenges, and future perspectives: A systematic survey. Artif. Intell. Rev. 56 , 865–913 (2023).

Jang, J.-S.R., Sun, C.-T. & Mizutani, E. Neuro-fuzzy and soft computing—A computational approach to learning and machine intelligence [book review]. IEEE Trans. Autom. Control 42 , 1482–1484 (1997).

Ouifak, H. & Idri, A. Application of neuro-fuzzy ensembles across domains: A systematic review of the two last decades (2000–2022). Eng. Appl. Artif. Intell. 124 , 106582 (2023).

Jung, T. & Kim, J. A new support vector machine for categorical features. Expert Syst. Appl. 229 , 120449 (2023).

Hu, Q., Xie, Z. & Yu, D. Hybrid attribute reduction based on a novel fuzzy-rough model and information granulation. Pattern Recognit. 40 , 3509–3521 (2007).


Wang, P., He, J. & Li, Z. Attribute reduction for hybrid data based on fuzzy rough iterative computation model. Inf. Sci. 632 , 555–575 (2023).

Yeung, D. S., Chen, D., Tsang, E. C., Lee, J. W. & Xizhao, W. On the generalization of fuzzy rough sets. IEEE Trans. Fuzzy Syst. 13 , 343–361 (2005).

Gao, L., Yao, B.-X. & Li, L.-Q. L-fuzzy generalized neighborhood system-based pessimistic l-fuzzy rough sets and its applications. Soft Comput. 27 , 7773–7788 (2023).

Bhatt, R. B. & Gopal, M. On fuzzy-rough sets approach to feature selection. Pattern Recognit. Lett. 26 , 965–975 (2005).

Dubois, D. & Prade, H. Putting fuzzy sets and rough sets together. Intell. Decis. Support 23 , 203–232 (1992).

Jensen, R. & Shen, Q. Fuzzy-rough sets for descriptive dimensionality reduction. In 2002 IEEE World Congress on Computational Intelligence. 2002 IEEE International Conference on Fuzzy Systems. FUZZ-IEEE’02. Proceedings (Cat. No. 02CH37291) , vol. 1, 29–34 (IEEE, 2002).

Pedrycz, W. & Vukovich, G. Feature analysis through information granulation and fuzzy sets. Pattern Recognit. 35 , 825–834 (2002).

Jensen, R. & Shen, Q. Fuzzy-rough sets assisted attribute selection. IEEE Trans. Fuzzy Syst. 15 , 73–89 (2007).

Shen, Q. & Jensen, R. Selecting informative features with fuzzy-rough sets and its application for complex systems monitoring. Pattern Recognit. 37 , 1351–1363 (2004).

Wang, X., Tsang, E. C., Zhao, S., Chen, D. & Yeung, D. S. Learning fuzzy rules from fuzzy samples based on rough set technique. Inf. Sci. 177 , 4493–4514 (2007).


Wei, W., Liang, J., Qian, Y. & Wang, F. An attribute reduction approach and its accelerated version for hybrid data. In 2009 8th IEEE International Conference on Cognitive Informatics , 167–173 (IEEE, 2009).

Yin, T., Chen, H., Li, T., Yuan, Z. & Luo, C. Robust feature selection using label enhancement and \(\beta \) -precision fuzzy rough sets for multilabel fuzzy decision system. Fuzzy Sets Syst. 461 , 108462 (2023).

Yin, T. et al. Exploiting feature multi-correlations for multilabel feature selection in robust multi-neighborhood fuzzy \(\beta \) covering space. Inf. Fusion 104 , 102150 (2024).

Yin, T. et al. A robust multilabel feature selection approach based on graph structure considering fuzzy dependency and feature interaction. IEEE Trans. Fuzzy Syst. 31 , 4516–4528. https://doi.org/10.1109/TFUZZ.2023.3287193 (2023).

Huang, W., She, Y., He, X. & Ding, W. Fuzzy rough sets-based incremental feature selection for hierarchical classification. IEEE Trans. Fuzzy Syst. https://doi.org/10.1109/TFUZZ.2023.3300913 (2023).

Dong, L., Wang, R. & Chen, D. Incremental feature selection with fuzzy rough sets for dynamic data sets. Fuzzy Sets Syst. 467 , 108503 (2023).

Chakraborty, M. K. & Samanta, P. Fuzzy sets and rough sets: A mathematical narrative. In Fuzzy, Rough and Intuitionistic Fuzzy Set Approaches for Data Handling: Theory and Applications , 1–21 (Springer, 2023).

Wang, Z., Chen, H., Yuan, Z. & Li, T. Fuzzy-rough hybrid dimensionality reduction. Fuzzy Sets Syst. 459 , 95–117 (2023).

Xue, Z.-A., Jing, M.-M., Li, Y.-X. & Zheng, Y. Variable precision multi-granulation covering rough intuitionistic fuzzy sets. Granul. Comput. 8 , 577–596 (2023).

Akram, M., Nawaz, H. S. & Deveci, M. Attribute reduction and information granulation in pythagorean fuzzy formal contexts. Expert Systems Appl. 222 , 119794 (2023).

Hu, M., Guo, Y., Chen, D., Tsang, E. C. & Zhang, Q. Attribute reduction based on neighborhood constrained fuzzy rough sets. Knowl. Based Syst. 274 , 110632 (2023).

Zhang, C., Ding, J., Zhan, J., Sangaiah, A. K. & Li, D. Fuzzy intelligence learning based on bounded rationality in IOMT systems: A case study in Parkinson’s disease. IEEE Trans. Comput. Soc. Syst. 10 , 1607–1621. https://doi.org/10.1109/TCSS.2022.3221933 (2023).

Zhang, C. & Zhang, J. Three-way group decisions with incomplete spherical fuzzy information for treating Parkinson’s disease using IOMT devices. Wireless Communications and Mobile Computing , vol. 2022 (2022).

Jain, P., Tiwari, A. K. & Som, T. Improving financial bankruptcy prediction using oversampling followed by fuzzy rough feature selection via evolutionary search. In Computational Management: Applications of Computational Intelligence in Business Management , 455–471 (Springer, 2021).

Shreevastava, S., Singh, S., Tiwari, A. & Som, T. Different classes ratio and Laplace summation operator based intuitionistic fuzzy rough attribute selection. Iran. J. Fuzzy Syst. 18 , 67–82 (2021).


Shreevastava, S., Tiwari, A. & Som, T. Feature subset selection of semi-supervised data: an intuitionistic fuzzy-rough set-based concept. In Proceedings of International Ethical Hacking Conference 2018: eHaCON 2018, Kolkata, India , 303–315 (Springer, 2019).

Tiwari, A. K., Nath, A., Subbiah, K. & Shukla, K. K. Enhanced prediction for observed peptide count in protein mass spectrometry data by optimally balancing the training dataset. Int. J. Pattern Recognit. Artif. Intell. 31 , 1750040 (2017).

Jain, P., Tiwari, A. K. & Som, T. An intuitionistic fuzzy bireduct model and its application to cancer treatment. Comput. Ind. Eng. 168 , 108124 (2022).

Yin, T., Chen, H., Yuan, Z., Li, T. & Liu, K. Noise-resistant multilabel fuzzy neighborhood rough sets for feature subset selection. Inf. Sci. 621 , 200–226 (2023).

Sang, B., Chen, H., Yang, L., Li, T. & Xu, W. Incremental feature selection using a conditional entropy based on fuzzy dominance neighborhood rough sets. IEEE Trans. Fuzzy Syst. 30 , 1683–1697 (2021).

Xu, J., Meng, X., Qu, K., Sun, Y. & Hou, Q. Feature selection using relative dependency complement mutual information in fitting fuzzy rough set model. Appl. Intell. 53 , 18239–18262 (2023).

Jiang, H., Zhan, J. & Chen, D. Promethee ii method based on variable precision fuzzy rough sets with fuzzy neighborhoods. Artif. Intell. Rev. 54 , 1281–1319 (2021).

Qu, K., Xu, J., Han, Z. & Xu, S. Maximum relevance minimum redundancy-based feature selection using rough mutual information in adaptive neighborhood rough sets. Appl. Intell. 53 , 17727–17746 (2023).

Xu, J., Yuan, M. & Ma, Y. Feature selection using self-information and entropy-based uncertainty measure for fuzzy neighborhood rough set. Complex Intell. Syst. 8 , 287–305 (2022).

Xu, J., Shen, K. & Sun, L. Multi-label feature selection based on fuzzy neighborhood rough sets. Complex Intell. Syst. 8 , 2105–2129 (2022).

Sang, B. et al. Feature selection for dynamic interval-valued ordered data based on fuzzy dominance neighborhood rough set. Knowl. Based Syst. 227 , 107223 (2021).

Wu, W.-Z., Mi, J.-S. & Zhang, W.-X. Generalized fuzzy rough sets. Inf. Sci. 151 , 263–282 (2003).

Gogoi, P., Bhattacharyya, D. K. & Kalita, J. K. A rough set-based effective rule generation method for classification with an application in intrusion detection. Int. J. Secur. Netw. 8 , 61–71 (2013).

Grzymala-Busse, J. W. Knowledge acquisition under uncertainty—A rough set approach. J. Intell. Robot. Syst. 1 , 3–16 (1988).

Jing, S. & She, K. Heterogeneous attribute reduction in noisy system based on a generalized neighborhood rough sets model. World Acad. Sci. Eng. Technol. 75 , 1067–1072 (2011).

Zhu, X., Zhang, Y. & Zhu, Y. Intelligent fault diagnosis of rolling bearing based on kernel neighborhood rough sets and statistical features. J. Mech. Sci. Technol. 26 , 2649–2657 (2012).

Zhao, B.-T. & Jia, X.-F. Neighborhood covering rough set model of fuzzy decision system. Int. J. Comput. Sci. Issues 10 , 51 (2013).

Hou, M.-L. et al. Neighborhood rough set reduction-based gene selection and prioritization for gene expression profile analysis and molecular cancer classification. J Biomed Biotechnol. 2010 , 726413 (2010).


He, M.-X. & Qiu, D.-D. A intrusion detection method based on neighborhood rough set. TELKOMNIKA Indones. J. Electr. Eng. 11 , 3736–3741 (2013).


Newman, D. J., Hettich, S., Blake, C. L. & Merz, C. UCI repository of machine learning databases (1998).

Aarsland, D. et al. Parkinson disease-associated cognitive impairment. Nat. Rev. Dis. Primers 7 , 47 (2021).

Lang, A. E. & Lozano, A. M. Parkinson’s disease. N. Engl. J. Med. 339 , 1130–1143 (1998).


Engin, M. et al. The classification of human tremor signals using artificial neural network. Expert Syst. Appl. 33 , 754–761 (2007).

Liver Disorders. UCI Machine Learning Repository. https://doi.org/10.24432/C54G67 (1990).

Sejnowski, T. & Gorman, R. Connectionist bench (sonar, mines vs. rocks). UCI Machine Learning Repository. https://doi.org/10.24432/C5T01Q

Elter, M. Mammographic Mass. UCI Machine Learning Repository. https://doi.org/10.24432/C53K6Z (2007).

Haberman, S. Haberman’s Survival. UCI Machine Learning Repository. https://doi.org/10.24432/C5XK51 (1999).

Hofmann, H. Statlog (German Credit Data). UCI Machine Learning Repository. https://doi.org/10.24432/C5NC77 (1994).

Kubat, M., Holte, R. C. & Matwin, S. Machine learning for the detection of oil spills in satellite radar images. Mach. Learn. 30 , 195–215 (1998).

Zwitter, M. & Soklic, M. Lymphography. UCI Machine Learning Repository. https://doi.org/10.24432/C54598 (1988).

Molecular Biology (Splice-junction Gene Sequences). UCI Machine Learning Repository. https://doi.org/10.24432/C5M888 (1992).

Alpaydin, E. & Kaynak, C. Optical Recognition of Handwritten Digits. UCI Machine Learning Repository. https://doi.org/10.24432/C50P49 (1998).

Schubert, E., Wojdanowski, R., Zimek, A. & Kriegel, H.-P. On evaluation of outlier rankings and outlier scores. In Proceedings of the 2012 SIAM International Conference on Data Mining , 1047–1058 (SIAM, 2012).

Malerba, D. Page Blocks Classification. UCI Machine Learning Repository. https://doi.org/10.24432/C5J590 (1995).

Srinivasan, A. Statlog (Landsat Satellite). UCI Machine Learning Repository. https://doi.org/10.24432/C55887 (1993).

Rossi, R. A. & Ahmed, N. K. The network data repository with interactive graph analytics and visualization. In AAAI (2015).

Download references
