College & Research Libraries News  ( C&RL News ) is the official newsmagazine and publication of record of the Association of College & Research Libraries,  providing articles on the latest trends and practices affecting academic and research libraries.


Research forum: Structured observation: How it works

By Jack Glazier, Research Assistant and Lecturer, University of Missouri-Columbia

The project described in this article was originally reported at the ALA Library Research Round Table’s Research Forum in Dallas and again at the College and University Libraries Section of the Kansas Library Association in Topeka in October 1984. The research project [1] itself was designed and implemented by Robert Grover, dean of the School of Library and Information Management, Emporia (Kansas) State University, and this author. The project was planned 1) to test structured observation as a research methodology which can be used for research in schools preparing library and information professionals, and 2) to determine the information use patterns of a specific target group as a study of information transfer theory.

Information flow

Greer has developed a model [2] in which the transfer of information assumes identifiable patterns influenced by the environment encompassing the social roles of the individual information user. That environment includes patterns of information generation, dissemination, and utilization, as well as a specialized vocabulary, and pertinent names and places singular to the individual’s subsociety.

Although Greer’s information transfer model provided a theoretical suprastructure, research was still needed to detail more clearly the patterns of information transfer for various subsocieties. Appropriate and innovative methodologies are essential for research of this type. Consequently, one early objective was the development of a methodology for research designed to map the patterns of information transfer for specific subsocieties that would be as workable for graduate students and faculty as for practitioners in the field.

Structured observation

The primary methodology selected was structured observation. Structured observation is a qualitative research methodology that has been used in the social sciences for several years. It is a methodology in which an event or series of events is observed in its natural setting and recorded by an independent researcher. The observations are structured in the sense that pre-determined categories are used to guide the recording process. Although, to our knowledge, it had not previously been used for library research in this country, it seemed to us particularly well suited for information transfer research as we had envisioned it.

As a qualitative research methodology, structured observation was desirable for the study of information transfer theory for several reasons, especially its flexibility that allowed us to change the length of the observation periods from what others had previously used. Structured observation could yield specific types of data from an unfamiliar and unrehearsed sequence of activities.

Structured observation is also systematic and comprehensive, allowing an observer to record data in predetermined increments during a specified period of time. A final consideration for our selection was that structured observation had been recently utilized in a research project by Hale in the field of public administration [3] to investigate city managers in California as interactive information agents.

The subjects

The subsociety selected for our investigation was also city managers. We chose them in part because of their role in the Hale study, which employed the same basic methodology that we intended to use. Although our study was not to be an exact replication of the Hale study, we believed that there were enough similarities that we would be able to use it to help validate our version of structured observation. Another more pragmatic reason for our selection was the accessibility of the group. There were cities employing the city manager/city commission form of government geographically close enough to make travel feasible on our limited budget.

Two consultants who were acknowledged experts in the area of public administration helped in the actual process of selecting the subjects. Both were asked to submit a ranked list of successful city managers working in Kansas. The consultants recommended a total of ten prospective subjects. Their recommendations were then merged in rank order and the prospective subjects were contacted by letter and phone.

For this study we needed five subjects willing to commit themselves, their staffs, their offices, and their time to our project. We decided to contact subjects one after another until we were able to find five willing to make this investment. Only in three instances were we unable to use a recommended subject. Time involvement was the reason most often given by subjects not wishing to participate in the study.

After they had consented to take part in the project, we visited and interviewed each of the five managers and their staffs prior to beginning the observation sessions. At the meetings we explained more fully the project and its methodology, asked them for candid answers, and conducted a pre-study interview regarding their perceived information sources. We also scheduled an interview with the manager’s secretary at this time, and requested additional data such as the manager’s vitae, a copy of his work calendar for the past month, and a copy of the city’s organizational chart. In each instance, managers were assured that the focus of the study was the job, not the individual; the basic similarities in information use, not the differences; and the actual processes involved in information use in relation to everyday on-the-job activities. Finally, we set times and dates for the actual observation sessions.

The sessions

Initially there were to be five observation sessions, each four hours in length. Sessions were planned for consecutive work days, alternating mornings and afternoons. However, as the observations proceeded we varied the design of the methodology. Occasionally unforeseen situations would arise involving the manager’s schedule that would require alterations in the time frame. For example, if a manager was unable to be in his office for a morning session, the session could often be rescheduled for the afternoon, even though it might mean more afternoon sessions than were originally planned. In one instance, the manager had a late morning meeting that extended into the afternoon, resulting in an observation session of six hours instead of four. After looking at the data gathered from these sessions we found that these changes did not appear to seriously affect the continuity of the data. This led us to design variation into future observations as a further test of the methodology from the standpoint of both the tools and the actual observer.

In fact what we found was that although the tools held up fine, it was the observer that suffered. The longer days and back-to-back sessions, coupled with the stress of travel, made concentration difficult for the observer at the end of a long day. Variations in the length of the observation sessions appeared to affect the data quantitatively, not qualitatively.

For the actual observations the only tools that were taken into the sessions were two mechanical pencils, a watch, a clipboard, and the recording forms. Two types of forms were employed. The first, called a chronological form, was used to record the moment by moment activities of the subject. Its design was based on a communication model involving sender, receiver, and message. Categories for recording the data included: time of the activity; description of the activity (meeting, phone, conversation, etc.); the medium of communication (telephone, direct personal communication, etc.); description of the apparent purposes and issues of the communication (this was often verified with the city manager during quiet times); and the location of the communication (this category was necessary because not all communication took place in the manager’s office).

The second form was similar to the chronological form, but with the addition of a category for recording the attention given a particular item (skimmed, read, studied, etc.) and one that gave some indication of the disposition of a specific item (filed, sent on, discarded, etc.). The forms were designed to aid in taking notes that were as accurate and efficient as possible.
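A present-day reader might model the categories of the two forms as a small data structure. The sketch below is illustrative only: the 1985 study used paper forms, and the field names are our assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass

# Hypothetical schema for the two recording forms described in the text.

@dataclass
class ChronologicalEntry:
    """One moment-by-moment record (sender/receiver/message model)."""
    time: str        # e.g. "09:15"
    activity: str    # meeting, phone, conversation, ...
    medium: str      # telephone, direct personal communication, ...
    purpose: str     # apparent purpose and issues of the communication
    location: str    # where it took place (not always the manager's office)

@dataclass
class ItemEntry(ChronologicalEntry):
    """Second form: adds attention given to an item and its disposition."""
    attention: str = ""    # skimmed, read, studied, ...
    disposition: str = ""  # filed, sent on, discarded, ...

entry = ItemEntry(time="10:40", activity="mail", medium="memo",
                  purpose="budget question", location="manager's office",
                  attention="read", disposition="filed")
print(entry.disposition)  # filed
```

Having the second form inherit the chronological categories mirrors the text's description: it is "similar to the chronological form" with two added columns.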

For structured observation to yield valid results, the observer has to be cautious not to affect the behavior of the subject. We operated on the principle that the more inconspicuous the observer, the less effect his presence would have.

We found several ways that seemed to work in making an observer less obtrusive. One way was for the observer to limit eye contact with the subjects as much as possible during meetings or conversations. By not establishing eye contact from the start, the subjects soon became involved in the business at hand and forgot about the observer’s presence. Another method was for the observer to keep his head down with attention directed strictly on the forms. When I used this technique in our observations it helped in several ways. It took care of the problem of eye contact and in effect took me out of the meetings. In addition, by concentrating solely on notes during meetings, my attention was easier to control and my notes were more detailed and complete.

Another aspect central to controlling the observer’s impact on the subject and the environment was positioning the observer in the manager’s office. We found that the best location for the observer was behind and slightly to the right of the subject. This location allowed the observer a clear view of the subject’s desk as well as the entire room. It also placed the observer close enough to the subject to be able to hear and note phone conversations.

Permission had been given for the observer to have access to the subject’s phone calls and mail. Provisions were made for the observer to leave the room if the manager felt the subject being discussed was sensitive. This only happened once and then only for a few minutes. On two occasions the observer was asked not to divulge the specifics of a conversation. In each case the conversation involved the recruitment of new businesses to the community. With these few exceptions, the observer was able to log detailed and comprehensive data.

One problem faced by the observer was long periods of inactivity on the part of the subject. Maintaining attention and yet remaining as unobtrusive as possible during these periods was difficult. On one occasion a manager spent nearly two hours preparing a presentation for the city commission meeting. A majority of the time was spent writing with an occasional recourse to a reference book located on his desk. As the observer in this case, I found remaining alert and attentive without shuffling papers or shifting positions for two hours a formidable task. This experience led us to conclude that data gathering in similar situations would be best accomplished using an alternative methodology such as interviews.

Conversely, a subject that is overly active can also present problems for the researcher. Subjects who are constantly on the move or involved in a large number of impromptu information exchanges presented difficulties in accurately recording data. One manager we observed spent a large amount of time visiting department heads in their offices. Often this manager would meet individuals in the hallways while moving from one office to another. The ensuing conversations were often short in duration but substantive in content. Note-taking while accompanying a subject along a hallway was difficult. The result was that the observer would have extremely sketchy notes on an encounter that in some instances was a significant aspect of a subject’s information transfer pattern. In this situation a solution might be for the observer to carry and use a tape recorder.

Another instance where a high degree of activity became a problem was in meetings. Meetings involving several participants created difficulties for the observer because of the amount of data and the rate at which it was generated. In an effort to deal with this type of situation we tested the use of two observers. We found this worked very well. By putting the notes together we had a complete record of a fast-moving, complex meeting. Another alternative would be to tape record or videotape the meeting if only one observer were available for the session.

When problems arose during observations the observer made a note of the situation so that it could be discussed during a debriefing session. Debriefing sessions were held as soon as possible after each session. They were initially designed for the two principals in the project (Robert Grover and myself) to discuss difficulties encountered during an observation and to make necessary adjustments. We also reviewed highlights of the day while checking observation notes for clarity.

Our analysis of the structured observation method showed that it permitted the researcher to gather complete data on complex information interactions. It yielded data with sufficient context to remain fresh, thus allowing researchers more time for analysis. In most instances the data was gathered with relationships intact, resulting in clearer explanations. The data clearly defined the information transfer patterns of a specific subsociety— city managers. The success of this project relied to a large degree on the flexibility of the methodology.

Today not only must academic librarians be aware of a wide range of research methodologies to support the research being done by students and faculty, but they also are finding that research and publication have become necessary prerequisites for professional advancement. Unfortunately librarians must deal with time constraints which limit research opportunities.

One consequence of this project is that it validated a methodology that is responsive to the research needs of practitioners. Specifically, we found that structured observation is appropriate for use by academic librarians, when used in conjunction with interviews or other data gathering techniques, to determine the information behavior and needs of specific client groups. It is particularly effective for gathering data about client groups for which little is known.

However, for academic librarians the strength of structured observation is its adaptability to restrictive time limitations as well as its wide range of applications. It is a methodology well suited for observing classroom instruction, faculty meetings, curriculum meetings, and the individual work of specific client groups.

1. Robert Grover and Jack Glazier, “Information Transfer in City Government,” Public Library Quarterly 5 (Winter 1984): 9-27.
2. Roger C. Greer, “Information Transfer: A Conceptual Model for Librarianship, Information Science and Information Management with Implications for Library Education,” Great Plains Libraries 20 (1982): 2-15.
3. Martha L. Hale, A Structured Observation Study of the Nature of City Managers (Ph.D. dissertation, University of Southern California, 1983).


© 2024 Association of College and Research Libraries, a division of the American Library Association

Print ISSN: 0099-0086 | Online ISSN: 2150-6698


Non-Experimental Research

32 Observational Research

Learning Objectives

  • List the various types of observational research methods and distinguish between each.
  • Describe the strengths and weaknesses of each observational research method.

What Is Observational Research?

The term observational research is used to refer to several different types of non-experimental studies in which behavior is systematically observed and recorded. The goal of observational research is to describe a variable or set of variables. More generally, the goal is to obtain a snapshot of specific characteristics of an individual, group, or setting. As described previously, observational research is non-experimental because nothing is manipulated or controlled, and as such we cannot arrive at causal conclusions using this approach. The data that are collected in observational research studies are often qualitative in nature but they may also be quantitative or both (mixed-methods). There are several different types of observational methods that will be described below.

Naturalistic Observation

Naturalistic observation is an observational method that involves observing people’s behavior in the environment in which it typically occurs. Thus naturalistic observation is a type of field research (as opposed to a type of laboratory research). Jane Goodall’s famous research on chimpanzees is a classic example of naturalistic observation. Dr. Goodall spent three decades observing chimpanzees in their natural environment in East Africa. She examined such things as chimpanzees’ social structure, mating patterns, gender roles, family structure, and care of offspring by observing them in the wild. However, naturalistic observation could more simply involve observing shoppers in a grocery store, children on a school playground, or psychiatric inpatients in their wards. Researchers engaged in naturalistic observation usually make their observations as unobtrusively as possible so that participants are not aware that they are being studied. Such an approach is called disguised naturalistic observation. Ethically, this method is considered to be acceptable if the participants remain anonymous and the behavior occurs in a public setting where people would not normally have an expectation of privacy. Grocery shoppers putting items into their shopping carts, for example, are engaged in public behavior that is easily observable by store employees and other shoppers. For this reason, most researchers would consider it ethically acceptable to observe them for a study. On the other hand, one of the arguments against the ethicality of the naturalistic observation of “bathroom behavior” discussed earlier in the book is that people have a reasonable expectation of privacy even in a public restroom and that this expectation was violated.

In cases where it is not ethical or practical to conduct disguised naturalistic observation, researchers can conduct undisguised naturalistic observation, where the participants are made aware of the researcher’s presence and the monitoring of their behavior. However, one concern with undisguised naturalistic observation is reactivity. Reactivity refers to when a measure changes participants’ behavior. In the case of undisguised naturalistic observation, the concern is that when people know they are being observed and studied, they may act differently than they normally would. This type of reactivity is known as the Hawthorne effect. For instance, you may act much differently in a bar if you know that someone is observing you and recording your behaviors, and this would invalidate the study. So disguised observation is less reactive and therefore can have higher validity, because people are not aware that their behaviors are being observed and recorded. However, we now know that people often become used to being observed and with time they begin to behave naturally in the researcher’s presence. In other words, over time people habituate to being observed. Think about reality shows like Big Brother or Survivor where people are constantly being observed and recorded. While they may be on their best behavior at first, in a fairly short amount of time they are flirting, having sex, wearing next to nothing, screaming at each other, and occasionally behaving in ways that are embarrassing.

Participant Observation

Another approach to data collection in observational research is participant observation. In  participant observation , researchers become active participants in the group or situation they are studying. Participant observation is very similar to naturalistic observation in that it involves observing people’s behavior in the environment in which it typically occurs. As with naturalistic observation, the data that are collected can include interviews (usually unstructured), notes based on their observations and interactions, documents, photographs, and other artifacts. The only difference between naturalistic observation and participant observation is that researchers engaged in participant observation become active members of the group or situations they are studying. The basic rationale for participant observation is that there may be important information that is only accessible to, or can be interpreted only by, someone who is an active participant in the group or situation. Like naturalistic observation, participant observation can be either disguised or undisguised. In disguised participant observation , the researchers pretend to be members of the social group they are observing and conceal their true identity as researchers.

In a famous example of disguised participant observation, Leon Festinger and his colleagues infiltrated a doomsday cult known as the Seekers, whose members believed that the apocalypse would occur on December 21, 1954. Interested in studying how members of the group would cope psychologically when the prophecy inevitably failed, they carefully recorded the events and reactions of the cult members in the days before and after the supposed end of the world. Unsurprisingly, the cult members did not give up their belief but instead convinced themselves that it was their faith and efforts that saved the world from destruction. Festinger and his colleagues later published a book about this experience, which they used to illustrate the theory of cognitive dissonance (Festinger, Riecken, & Schachter, 1956) [1] .

In contrast, in undisguised participant observation, the researchers become a part of the group they are studying and disclose their true identity as researchers to the group under investigation. Once again there are important ethical issues to consider with disguised participant observation. First, no informed consent can be obtained, and second, deception is being used. The researcher is deceiving the participants by intentionally withholding information about their motivations for being a part of the social group they are studying. But sometimes disguised participation is the only way to access a protective group (like a cult). Further, disguised participant observation is less prone to reactivity than undisguised participant observation.

Rosenhan’s study (1973) [2]   of the experience of people in a psychiatric ward would be considered disguised participant observation because Rosenhan and his pseudopatients were admitted into psychiatric hospitals on the pretense of being patients so that they could observe the way that psychiatric patients are treated by staff. The staff and other patients were unaware of their true identities as researchers.

Another example of participant observation comes from a study by sociologist Amy Wilkins on a university-based religious organization that emphasized how happy its members were (Wilkins, 2008) [3] . Wilkins spent 12 months attending and participating in the group’s meetings and social events, and she interviewed several group members. In her study, Wilkins identified several ways in which the group “enforced” happiness—for example, by continually talking about happiness, discouraging the expression of negative emotions, and using happiness as a way to distinguish themselves from other groups.

One of the primary benefits of participant observation is that the researchers are in a much better position to understand the viewpoint and experiences of the people they are studying when they are a part of the social group. The primary limitation with this approach is that the mere presence of the observer could affect the behavior of the people being observed. While this is also a concern with naturalistic observation, additional concerns arise when researchers become active members of the social group they are studying because they may change the social dynamics and/or influence the behavior of the people they are studying. Similarly, if the researcher acts as a participant observer there can be concerns with biases resulting from developing relationships with the participants. Concretely, the researcher may become less objective, resulting in more experimenter bias.

Structured Observation

Another observational method is structured observation. Here the investigator makes careful observations of one or more specific behaviors in a particular setting that is more structured than the settings used in naturalistic or participant observation. Often the setting in which the observations are made is not the natural setting. Instead, the researcher may observe people in the laboratory environment. Alternatively, the researcher may observe people in a natural setting (like a classroom setting) that they have structured in some way, for instance by introducing some specific task participants are to engage in or by introducing a specific social situation or manipulation.

Structured observation is very similar to naturalistic observation and participant observation in that in all three cases researchers are observing naturally occurring behavior; however, the emphasis in structured observation is on gathering quantitative rather than qualitative data. Researchers using this approach are interested in a limited set of behaviors. This allows them to quantify the behaviors they are observing. In other words, structured observation is less global than naturalistic or participant observation because the researcher engaged in structured observations is interested in a small number of specific behaviors. Therefore, rather than recording everything that happens, the researcher only focuses on very specific behaviors of interest.
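The quantitative flavor of structured observation can be shown with a minimal sketch: reduce a session's event log to counts of a small, predefined set of target behaviors, ignoring everything else. The behavior labels and the event log below are invented for illustration.

```python
from collections import Counter

# Hypothetical target-behavior set and observation log.
TARGET_BEHAVIORS = {"smile", "frown", "look away"}

events = ["smile", "frown", "smile",
          "check phone",            # not a target behavior: ignored
          "look away", "smile"]

counts = Counter(e for e in events if e in TARGET_BEHAVIORS)
print(counts["smile"])           # 3
print("check phone" in counts)   # False
```

The filtering step is the point: the observer records only the behaviors of interest, which is what makes the method "less global" than naturalistic or participant observation.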

Researchers Robert Levine and Ara Norenzayan used structured observation to study differences in the “pace of life” across countries (Levine & Norenzayan, 1999) [4] . One of their measures involved observing pedestrians in a large city to see how long it took them to walk 60 feet. They found that people in some countries walked reliably faster than people in other countries. For example, people in Canada and Sweden covered 60 feet in just under 13 seconds on average, while people in Brazil and Romania took close to 17 seconds. When structured observation  takes place in the complex and even chaotic “real world,” the questions of when, where, and under what conditions the observations will be made, and who exactly will be observed are important to consider. Levine and Norenzayan described their sampling process as follows:

“Male and female walking speed over a distance of 60 feet was measured in at least two locations in main downtown areas in each city. Measurements were taken during main business hours on clear summer days. All locations were flat, unobstructed, had broad sidewalks, and were sufficiently uncrowded to allow pedestrians to move at potentially maximum speeds. To control for the effects of socializing, only pedestrians walking alone were used. Children, individuals with obvious physical handicaps, and window-shoppers were not timed. Thirty-five men and 35 women were timed in most cities.” (p. 186).

Precise specification of the sampling process in this way makes data collection manageable for the observers, and it also provides some control over important extraneous variables. For example, by making their observations on clear summer days in all countries, Levine and Norenzayan controlled for effects of the weather on people’s walking speeds.  In Levine and Norenzayan’s study, measurement was relatively straightforward. They simply measured out a 60-foot distance along a city sidewalk and then used a stopwatch to time participants as they walked over that distance.
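The walking-speed measure reduces to simple arithmetic: average the crossing times over a sample of pedestrians, then divide the fixed distance by that average. In this sketch, only the 60-foot distance comes from the study; all timing values are invented for illustration.

```python
DISTANCE_FT = 60.0  # the fixed course length used in the study

def mean_time_s(times_s):
    """Average crossing time in seconds over a sample of pedestrians."""
    return sum(times_s) / len(times_s)

def speed_ft_per_s(mean_time):
    """Convert a mean crossing time into an average speed."""
    return DISTANCE_FT / mean_time

fast_city = [12.5, 13.1, 12.8]   # hypothetical timings, seconds
slow_city = [16.9, 17.3, 16.8]

print(round(speed_ft_per_s(mean_time_s(fast_city)), 2))  # 4.69
print(round(speed_ft_per_s(mean_time_s(slow_city)), 2))  # 3.53
```

With the reported averages (~13 s versus ~17 s), the same arithmetic gives roughly 4.6 versus 3.5 feet per second, a difference large enough to detect with a stopwatch.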

As another example, researchers Robert Kraut and Robert Johnston wanted to study bowlers’ reactions to their shots, both when they were facing the pins and then when they turned toward their companions (Kraut & Johnston, 1979) [5] . But what “reactions” should they observe? Based on previous research and their own pilot testing, Kraut and Johnston created a list of reactions that included “closed smile,” “open smile,” “laugh,” “neutral face,” “look down,” “look away,” and “face cover” (covering one’s face with one’s hands). The observers committed this list to memory and then practiced by coding the reactions of bowlers who had been videotaped. During the actual study, the observers spoke into an audio recorder, describing the reactions they observed. Among the most interesting results of this study was that bowlers rarely smiled while they still faced the pins. They were much more likely to smile after they turned toward their companions, suggesting that smiling is not purely an expression of happiness but also a form of social communication.

In yet another example (this one in a laboratory environment), Dov Cohen and his colleagues had observers rate the emotional reactions of participants who had just been deliberately bumped and insulted by a confederate after they dropped off a completed questionnaire at the end of a hallway. The confederate was posing as someone who worked in the same building and who was frustrated by having to close a file drawer twice in order to permit the participants to walk past them (first to drop off the questionnaire at the end of the hallway and once again on their way back to the room where they believed the study they signed up for was taking place). The two observers were positioned at different ends of the hallway so that they could read the participants’ body language and hear anything they might say. Interestingly, the researchers hypothesized that participants from the southern United States, which is one of several places in the world that has a “culture of honor,” would react with more aggression than participants from the northern United States, a prediction that was in fact supported by the observational data (Cohen, Nisbett, Bowdle, & Schwarz, 1996) [6] .

When the observations require a judgment on the part of the observers—as in the studies by Kraut and Johnston and Cohen and his colleagues—a process referred to as coding is typically required. Coding generally requires clearly defining a set of target behaviors. The observers then categorize participants individually in terms of which behavior they have engaged in and the number of times they engaged in each behavior. The observers might even record the duration of each behavior. The target behaviors must be defined in such a way that guides different observers to code them in the same way. This difficulty with coding illustrates the issue of interrater reliability, as mentioned in Chapter 4. Researchers are expected to demonstrate the interrater reliability of their coding procedure by having multiple raters code the same behaviors independently and then showing that the different observers are in close agreement. Kraut and Johnston, for example, video recorded a subset of their participants’ reactions and had two observers independently code them. The two observers showed that they agreed on the reactions that were exhibited 97% of the time, indicating good interrater reliability.
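Percent agreement, the interrater-reliability statistic reported above, takes only a few lines to compute. The coded reactions in this sketch are invented for illustration; the category labels follow the list Kraut and Johnston used.

```python
def percent_agreement(coder_a, coder_b):
    """Fraction of observations on which two coders assigned the same code."""
    assert len(coder_a) == len(coder_b), "coders must rate the same observations"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Two observers independently coding the same five videotaped reactions.
a = ["open smile", "neutral face", "laugh", "look down", "open smile"]
b = ["open smile", "neutral face", "laugh", "look away", "open smile"]

print(percent_agreement(a, b))  # 0.8
```

Raw percent agreement is the simplest such statistic; chance-corrected measures like Cohen's kappa are often reported instead, since two coders will agree some of the time purely by accident.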

One of the primary benefits of structured observation is that it is far more efficient than naturalistic and participant observation. Because the researchers focus on specific behaviors, data collection takes less time and expense. Also, the environment is often structured to encourage the behaviors of interest, which again means that researchers do not have to invest as much time waiting for those behaviors to occur naturally. Finally, researchers using this approach can exert greater control over the environment. However, exerting more control may make the environment less natural, which decreases external validity. It is less clear, for instance, whether structured observations made in a laboratory environment will generalize to a real-world environment. Furthermore, because researchers engaged in structured observation are often not disguised, there may be more concerns with reactivity.

Case Studies

A  case study   is an in-depth examination of an individual. Sometimes case studies are also completed on social units (e.g., a cult) and events (e.g., a natural disaster). Most commonly in psychology, however, case studies provide a detailed description and analysis of an individual. Often the individual has a rare or unusual condition or disorder or has damage to a specific region of the brain.

Like many observational research methods, case studies tend to be more qualitative in nature. Case study methods involve an in-depth, and often longitudinal, examination of an individual. Depending on the focus of the case study, individuals may or may not be observed in their natural setting. If the natural setting is not what is of interest, then the individual may be brought into a therapist’s office or a researcher’s lab for study. Also, the bulk of the case study report will focus on in-depth descriptions of the person rather than on statistical analyses. With that said, some quantitative data may also be included in the write-up of a case study. For instance, an individual’s depression score may be compared to normative scores, or their scores before and after treatment may be compared. As with other qualitative methods, a variety of different methods and tools can be used to collect information on the case. For instance, interviews, naturalistic observation, structured observation, psychological testing (e.g., IQ test), and/or physiological measurements (e.g., brain scans) may be used to collect information on the individual.

HM is one of the most famous case studies in psychology. HM suffered from intractable and very severe epilepsy. A surgeon localized HM’s epilepsy to his medial temporal lobe, and in 1953 he removed large sections of HM’s hippocampus in an attempt to stop the seizures. The treatment was a success, in that it resolved his epilepsy, and his IQ and personality were unaffected. However, the doctors soon realized that HM exhibited a strange form of amnesia, called anterograde amnesia. HM was able to carry out a conversation and could remember short strings of letters, digits, and words. Basically, his short-term memory was preserved. However, HM could not commit new events to memory. He lost the ability to transfer information from his short-term memory to his long-term memory, something memory researchers call consolidation. So while he could carry on a conversation with someone, he would completely forget the conversation after it ended. This was an extremely important case study for memory researchers because it suggested a dissociation between short-term memory and long-term memory: these appeared to be two different abilities sub-served by different areas of the brain. It also suggested that the temporal lobes are particularly important for consolidating new information (i.e., for transferring information from short-term memory to long-term memory).


The history of psychology is filled with influential case studies, such as Sigmund Freud’s description of “Anna O.” (see Note 6.1 “The Case of “Anna O.””) and John Watson and Rosalie Rayner’s description of Little Albert (Watson & Rayner, 1920) [7] , who allegedly learned to fear a white rat—along with other furry objects—when the researchers repeatedly made a loud noise every time the rat approached him.

The Case of “Anna O.”

Sigmund Freud used the case of a young woman he called “Anna O.” to illustrate many principles of his theory of psychoanalysis (Freud, 1961) [8] . (Her real name was Bertha Pappenheim, and she was an early feminist who went on to make important contributions to the field of social work.) Anna had come to Freud’s colleague Josef Breuer around 1880 with a variety of odd physical and psychological symptoms. One of them was that for several weeks she was unable to drink any fluids. According to Freud,

She would take up the glass of water that she longed for, but as soon as it touched her lips she would push it away like someone suffering from hydrophobia.…She lived only on fruit, such as melons, etc., so as to lessen her tormenting thirst. (p. 9)

But according to Freud, a breakthrough came one day while Anna was under hypnosis.

[S]he grumbled about her English “lady-companion,” whom she did not care for, and went on to describe, with every sign of disgust, how she had once gone into this lady’s room and how her little dog—horrid creature!—had drunk out of a glass there. The patient had said nothing, as she had wanted to be polite. After giving further energetic expression to the anger she had held back, she asked for something to drink, drank a large quantity of water without any difficulty, and awoke from her hypnosis with the glass at her lips; and thereupon the disturbance vanished, never to return. (p.9)

Freud’s interpretation was that Anna had repressed the memory of this incident along with the emotion that it triggered and that this was what had caused her inability to drink. Furthermore, he believed that her recollection of the incident, along with her expression of the emotion she had repressed, caused the symptom to go away.

As an illustration of Freud’s theory, the case study of Anna O. is quite effective. As evidence for the theory, however, it is essentially worthless. The description provides no way of knowing whether Anna had really repressed the memory of the dog drinking from the glass, whether this repression had caused her inability to drink, or whether recalling this “trauma” relieved the symptom. It is also unclear from this case study how typical or atypical Anna’s experience was.

Figure 6.8 Anna O. “Anna O.” was the subject of a famous case study used by Freud to illustrate the principles of psychoanalysis. Source: http://en.wikipedia.org/wiki/File:Pappenheim_1882.jpg

Case studies are useful because they provide a level of detailed analysis not found in many other research methods, and greater insights may be gained from this more detailed analysis. As a result of a case study, the researcher may gain a sharpened understanding of what might be important to examine more extensively in future, more controlled research. Case studies are also often the only way to study rare conditions, because it may be impossible to find a large enough sample of individuals with the condition to use quantitative methods. Although at first glance a case study of a rare individual might seem to tell us little about ourselves, such studies often do provide insights into normal behavior. The case of HM, for example, provided important insights into the role of the hippocampus in memory consolidation.

However, it is important to note that while case studies can provide insights into certain areas and variables to study, and can be useful in helping develop theories, they should never be used as evidence for theories. In other words, case studies can be used as inspiration to formulate theories and hypotheses, but those hypotheses and theories then need to be formally tested using more rigorous quantitative methods. The reason case studies shouldn’t be used to provide support for theories is that they suffer from problems with both internal and external validity. Case studies lack the proper controls that true experiments contain. As such, they suffer from problems with internal validity, so they cannot be used to determine causation. For instance, during HM’s surgery, the surgeon may have accidentally lesioned another area of HM’s brain (a possibility suggested by the dissection of HM’s brain following his death) and that lesion may have contributed to his inability to consolidate new information. The fact is, with case studies we cannot rule out these sorts of alternative explanations. So, as with all observational methods, case studies do not permit determination of causation. In addition, because case studies are often of a single individual, and typically an abnormal individual, researchers cannot generalize their conclusions to other individuals. Recall that with most research designs there is a trade-off between internal and external validity. With case studies, however, there are problems with both internal validity and external validity. So there are limits both to the ability to determine causation and to generalize the results. A final limitation of case studies is that ample opportunity exists for the theoretical biases of the researcher to color or bias the case description. 
Indeed, the researcher who studied HM has been accused of destroying unpublished data that contradicted her theory about how memories are consolidated. A fascinating New York Times article describing some of the controversies that ensued after HM’s death and the analysis of his brain can be found at: https://www.nytimes.com/2016/08/07/magazine/the-brain-that-couldnt-remember.html?_r=0

Archival Research

Another approach that is often considered observational research involves analyzing archival data that have already been collected for some other purpose. An example is a study by Brett Pelham and his colleagues on “implicit egotism”—the tendency for people to prefer people, places, and things that are similar to themselves (Pelham, Carvallo, & Jones, 2005) [9] . In one study, they examined Social Security records to show that women with the names Virginia, Georgia, Louise, and Florence were especially likely to have moved to the states of Virginia, Georgia, Louisiana, and Florida, respectively.

As with naturalistic observation, measurement can be more or less straightforward when working with archival data. For example, counting the number of people named Virginia who live in various states based on Social Security records is relatively straightforward. But consider a study by Christopher Peterson and his colleagues on the relationship between optimism and health using data that had been collected many years before for a study on adult development (Peterson, Seligman, & Vaillant, 1988) [10] . In the 1940s, healthy male college students had completed an open-ended questionnaire about difficult wartime experiences. In the late 1980s, Peterson and his colleagues reviewed the men’s questionnaire responses to obtain a measure of explanatory style—their habitual ways of explaining bad events that happen to them. More pessimistic people tend to blame themselves and expect long-term negative consequences that affect many aspects of their lives, while more optimistic people tend to blame outside forces and expect limited negative consequences. To obtain a measure of explanatory style for each participant, the researchers used a procedure in which all negative events mentioned in the questionnaire responses, along with any causal explanations for them, were identified and written on index cards. These were given to a separate group of raters who rated each explanation in terms of three separate dimensions of optimism-pessimism. These ratings were then averaged to produce an explanatory style score for each participant. The researchers then assessed the statistical relationship between the men’s explanatory style as undergraduate students and archival measures of their health at approximately 60 years of age. The primary result was that the more optimistic the men were as undergraduate students, the healthier they were as older men. Pearson’s  r  was +.25.

This method is an example of  content analysis —a family of systematic approaches to measurement using complex archival data. Just as structured observation requires specifying the behaviors of interest and then noting them as they occur, content analysis requires specifying keywords, phrases, or ideas and then finding all occurrences of them in the data. These occurrences can then be counted, timed (e.g., the amount of time devoted to entertainment topics on the nightly news show), or analyzed in a variety of other ways.
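A minimal content analysis of the counting variety might look like the following sketch. The category names and keyword lists are hypothetical stand-ins for a real, carefully validated coding scheme; actual content-analysis projects define categories much more rigorously and check interrater reliability on the coding.

```python
import re
from collections import Counter

# Hypothetical coding scheme: phrases standing in for the ideas of interest.
CATEGORIES = {
    "blame_self": ["my fault", "i failed", "i should have"],
    "blame_external": ["bad luck", "unfair", "they made"],
}

def content_analyze(text, categories):
    """Count occurrences of each category's keyword phrases in a document."""
    text = text.lower()
    counts = Counter()
    for category, phrases in categories.items():
        for phrase in phrases:
            # re.escape so punctuation in a phrase is matched literally
            counts[category] += len(re.findall(re.escape(phrase), text))
    return counts

doc = "It was my fault we lost; I failed the unit. But the orders were unfair."
print(content_analyze(doc, CATEGORIES))
```

The resulting counts per category can then feed the kinds of analyses described above: tallied across documents, timed, or compared between groups.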

Media Attributions

  • What happens when you remove the hippocampus? – Sam Kean by TED-Ed licensed under a standard YouTube License
  • Pappenheim 1882  by unknown is in the  Public Domain .
  • Festinger, L., Riecken, H., & Schachter, S. (1956). When prophecy fails: A social and psychological study of a modern group that predicted the destruction of the world. University of Minnesota Press. ↵
  • Rosenhan, D. L. (1973). On being sane in insane places. Science, 179 , 250–258. ↵
  • Wilkins, A. (2008). “Happier than Non-Christians”: Collective emotions and symbolic boundaries among evangelical Christians. Social Psychology Quarterly, 71 , 281–301. ↵
  • Levine, R. V., & Norenzayan, A. (1999). The pace of life in 31 countries. Journal of Cross-Cultural Psychology, 30 , 178–205. ↵
  • Kraut, R. E., & Johnston, R. E. (1979). Social and emotional messages of smiling: An ethological approach. Journal of Personality and Social Psychology, 37 , 1539–1553. ↵
  • Cohen, D., Nisbett, R. E., Bowdle, B. F., & Schwarz, N. (1996). Insult, aggression, and the southern culture of honor: An "experimental ethnography." Journal of Personality and Social Psychology, 70 (5), 945-960. ↵
  • Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3 , 1–14. ↵
  • Freud, S. (1961).  Five lectures on psycho-analysis . New York, NY: Norton. ↵
  • Pelham, B. W., Carvallo, M., & Jones, J. T. (2005). Implicit egotism. Current Directions in Psychological Science, 14 , 106–110. ↵
  • Peterson, C., Seligman, M. E. P., & Vaillant, G. E. (1988). Pessimistic explanatory style is a risk factor for physical illness: A thirty-five year longitudinal study. Journal of Personality and Social Psychology, 55 , 23–27. ↵

Research that is non-experimental because it focuses on recording systematic observations of behavior in a natural or laboratory setting without manipulating anything.

An observational method that involves observing people’s behavior in the environment in which it typically occurs.

When researchers engage in naturalistic observation by making their observations as unobtrusively as possible so that participants are not aware that they are being studied.

When the participants are made aware of the researcher’s presence and the monitoring of their behavior.

Refers to when a measure changes participants’ behavior.

In the case of undisguised naturalistic observation, a type of reactivity in which people who know they are being observed and studied may act differently than they normally would.

Researchers become active participants in the group or situation they are studying.

Researchers pretend to be members of the social group they are observing and conceal their true identity as researchers.

Researchers become a part of the group they are studying and they disclose their true identity as researchers to the group under investigation.

When a researcher makes careful observations of one or more specific behaviors in a particular setting that is more structured than the settings used in naturalistic or participant observation.

A part of structured observation whereby the observers use a clearly defined set of guidelines to "code" behaviors—assigning specific behaviors they are observing to a category—and count the number of times or the duration that the behavior occurs.

An in-depth examination of an individual.

A family of systematic approaches to measurement using qualitative methods to analyze complex archival data.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Chapter 13. Participant Observation

Introduction

Although there are many possible forms of data collection in the qualitative researcher’s toolkit, the two predominant forms are interviewing and observing. This chapter and the following chapter explore observational data collection. While most observers also include interviewing, many interviewers do not also include observation. It takes some special skills and a certain confidence to be a successful observer. There is also a rich tradition of what I am going to call “deep ethnography” that will be covered in chapter 14. In this chapter, we tackle the basics of observational data collection.


What is Participant Observation?

While interviewing helps us understand how people make sense of their worlds, observing them helps us understand how they act and behave. Sometimes, these actions and behaviors belie what people think or say about their beliefs and values and practices. For example, a person can tell you they would never racially discriminate, but observing how they actually interact with racialized others might undercut those statements. This is not always about dishonesty. Most of us tend to act differently than we think we do or think we should. That is part of being human. If you are interested in what people say and believe , interviewing is a useful technique for data collection. If you are interested in how people act and behave , observing them is essential. And if you want to know both, particularly how thinking/believing and acting/behaving complement or contradict each other, then a combination of interviewing and observing is ideal.

There are a variety of terms we use for observational data collection, from ethnography to fieldwork to participant observation . Many researchers use these terms fairly interchangeably, but here I will separately define them. The subject of this chapter is observation in general, or participant observation, to highlight the fact that observers can also be participants. The subject of chapter 14 will be deep ethnography , a particularly immersive form of study that is attractive for a certain subset of qualitative researchers. Both participant observation and deep ethnography are forms of fieldwork in which the researcher leaves their office and goes into a natural setting to record observations that take place in that setting. [1]

Participant observation (PO) is a field approach to gathering data in which the researcher enters a specific site for purposes of engagement or observation. Participation and observation can be conceptualized as a continuum, and any given study can fall somewhere on that line between full participation (researcher is a member of the community or organization being studied) and observation (researcher pretends to be a fly on the wall surreptitiously but mostly by permission, recording what happens). Participant observation forms the heart of ethnographic research, an approach, if you remember, that seeks to understand and write about a particular culture or subculture. We’ll discuss what I am calling deep ethnography in the next chapter, where researchers often embed themselves for months if not years or even decades with a particular group to be able to fully capture “what it’s like.” But there are lighter versions of PO that can form the basis of a research study or that can supplement or work with other forms of data collection, such as interviews or archival research. This chapter will focus on these lighter versions, although note that much of what is said here can also apply to deep ethnography (chapter 14).

PO methods of gathering data present some special considerations—How involved is the researcher? How close is she to the subjects or site being studied? And how might her own social location—identity, position—affect the study? These are actually great questions for any kind of qualitative data collection but particularly apt when the researcher “enters the field,” so to speak. It is helpful to visualize where one falls on a continuum or series of continua (figure 13.1).


Let’s take a few examples and see how these continua work. Think about each of the following scenarios, and map them onto the possibilities of figure 13.1:

  • a nursing student during COVID doing research on patient/doctor interactions in the ICU
  • a graduate student accompanying a police officer during her rounds one day in a part of the city the graduate student has never visited
  • a professor raised Amish who goes back to her hometown to conduct research on Amish marriage practices for one month
  •  (What if the sociologist was also a member of the OCF board and camping crew?)

Depending on how the researcher answers those questions and where they stand on the PO continuum, various techniques will be more or less effective. For example, in cases where the researcher is a participant, writing reflective fieldnotes at the end of the day may be the primary form of data collected. After all, if the researcher is fully participating, they probably don’t have the time or ability to pull out a notepad and ask people questions. On the other side, when a researcher is more of an observer, this is exactly what they might do, so long as the people they are questioning are able to answer while going about their business. The more of an observer the researcher is, the more likely they are to engage in relatively structured interviews (using techniques discussed in chapters 11 and 12); the more of a participant, the more likely casual conversations or “unstructured interviews” will form the core of the data collected. [2]

Observation and Qualitative Traditions

Observational techniques are used whenever the researcher wants to document actual behaviors and practices as they happen (not as they are explained or recorded historically). Many traditions of inquiry employ observational data collection, but not all traditions employ them in the same way. Chapter 14 will cover one very specific tradition: ethnography. Because the word ethnography is sometimes used for all fieldwork, I am calling the subject of chapter 14 deep ethnography, those studies that take as their focus the documentation through the description of a culture or subculture. Deeply immersive, this tradition of ethnography typically entails several months or even years in the field. But there are plenty of other uses of observation that are less burdensome to the researcher.

Grounded Theory, in which theories emerge from a rigorous and systematic process of induction, is amenable to both interviewing and observing forms of data collection, and some of the best Grounded Theory works employ a deft combination of both. Often closely aligned with Grounded Theory in sociology is the tradition of symbolic interactionism (SI). Interviews and observations in combination are necessary to properly address the SI question, What common understandings give meaning to people’s interactions ? Gary Alan Fine’s body of work fruitfully combines interviews and observations to build theory in response to this SI question. His Authors of the Storm: Meteorologists and the Culture of Prediction is based on field observation and interviews at the Storm Prediction Center in Oklahoma; the National Weather Service in Washington, DC; and a few regional weather forecasting outlets in the Midwest. Using what he heard and what he observed, he builds a theory of weather forecasting based on social and cultural factors that take place inside local offices. In Morel Tales: The Culture of Mushrooming , Fine investigates the world of mushroom hunters through participant observation and interviews, eventually building a theory of “naturework” to describe how the meanings people hold about the world are constructed and are socially organized—our understanding of “nature” is based on human nature, if you will.

Phenomenology typically foregrounds interviewing, as the purpose of this tradition is to gather people’s understandings and meanings about a phenomenon. However, it is quite common for phenomenological interviewing to be supplemented with some observational data, especially as a check on the “reality” of the situations being described by those interviewed. In my own work, for example, I supplemented primary interviews with working-class college students with some participant observational work on the campus in which they were studying. This helped me gather information on the general silence about class on campus, which made the salience of class in the interviews even more striking ( Hurst 2010a ).

Critical theories such as standpoint approaches, feminist theory, and Critical Race Theory are often multimethod in design. Interviews, observations (possibly participation), and archival/historical data are all employed to gather an understanding of how a group of persons experiences a particular setting or institution or phenomenon and how things can be made more just . In Making Elite Lawyers , Robert Granfield ( 1992 ) drew on both classroom observations and in-depth interviews with students to document the conservatizing effects of the Harvard legal education on working-class students, female students, and students of color. In this case, stories recounted by students were amplified by searing examples of discrimination and bias observed by Granfield and reported in full detail through his fieldnotes.

Entry: Access and Issues

Managing your entry into a field site is one of the most important and nerve-wracking aspects of doing ethnographic research. Unlike interviews, which can be conducted in neutral settings, the field is an actual place with its own rules and customs that you are seeking to explore. How you “gain access” will depend on what kind of field you are entering. If your field site is a physical location with walls and a front desk (such as an office building or an elementary school), you will need permission from someone in the organization to enter and to conduct your study. Negotiating this might take weeks or even months. If your field site is a public site (such as a public dog park or city sidewalks), there is no “official” gatekeeper, but you will still probably need to find a person present at the site who can vouch for you (e.g., other dog owners or people hanging out on their stoops). [3] And if your field site is semipublic, as in a shopping mall, you might have to weigh the pros and cons of gaining “official” permission, as this might impede your progress or be difficult to ascertain whose permission to request. If you recall, many of the ethical dilemmas discussed in chapter 7 were about just such issues.

Even with official (or unofficial) permission to enter the site, however, your quest to gain access is not done. You will still need to gain the trust and permission of the people you encounter at that site. If you are a mere observer in a public setting, you probably do not need each person you observe to sign a consent form, but if you are a participant in an event or enterprise who is also taking notes and asking people questions, you probably do. Each study is unique here, so I recommend talking through the ethics of permission and consent seeking with a faculty mentor.

A separate but related issue from permission is how you will introduce yourself and your presence. How you introduce yourself to people in the field will depend very much on what level of participation you have chosen as well as whether you are an insider or outsider. Sometimes your presence will go unremarked, whereas other times you may stick out like a very sore thumb. Lareau ( 2021 ) advises that you be “vague but accurate” when explaining your presence. You don’t want to use academic jargon (unless your field is the academy!) that would be off-putting to the people you meet. Nor do you want to deceive anyone. “Hi, I’m Allison, and I am here to observe how students use career services” is accurate and simple and more effective than “I am here to study how race, class, and gender affect college students’ interactions with career services personnel.”

Researcher Note

Something that surprised me and that I still think about a lot is how to explain to respondents what I’m doing and why and how to help them feel comfortable with field work. When I was planning fieldwork for my dissertation, I was thinking of it from a researcher’s perspective and not from a respondent’s perspective. It wasn’t until I got into the field that I started to realize what a strange thing I was planning to spend my time on and asking others to allow me to do. Like, can I follow you around and write notes? This varied a bit by site—it was easier to ask to sit in on meetings, for example—but asking people to let me spend a lot of time with them was awkward for me and for them. I ended up asking if I could shadow them, a verb that seemed to make clear what I hoped to be able to do. But even this didn’t get around issues like respondents’ self-consciousness or my own. For example, respondents sometimes told me that their lives were “boring” and that they felt embarrassed to have someone else shadow them when they weren’t “doing anything.” Similarly, I would feel uncomfortable in social settings where I knew only one person. Taking field notes is not something to do at a party, and when introduced as a researcher, people would sometimes ask, “So are you researching me right now?” The answer to that is always yes. I figured out ways of taking notes that worked (I often sent myself text messages with jotted notes) and how to get more comfortable explaining what I wanted to be able to do (wanting to see the campus from the respondent’s perspective, for example), but it is still something I work to improve.

—Elizabeth M. Lee, Associate Professor of Sociology at Saint Joseph’s University, author of Class and Campus Life and coauthor of Geographies of Campus Inequality

Reflexivity in Fieldwork

As always, being aware of who you are, how you are likely to be read by others in the field, and how your own experiences and understandings of the world are likely to affect your reading of others in the field are all very important to conducting successful research. When Annette Lareau ( 2021 ) was managing a team of graduate student researchers in her study of parents and children, she noticed that her middle-class graduate students took in stride the fact that children called adults by their first names, while her working-class-origin graduate students “were shocked by what they considered the rudeness and disrespect middle-class children showed toward their parents and other adults” ( 151 ). This “finding” emerged from particular fieldnotes taken by particular research assistants. Having graduate students with different class backgrounds turned out to be useful. Being reflexive in this case meant interrogating one’s own expectations about how children should act toward adults. Creating thick descriptions in the fieldnotes (e.g., describing how children name adults) is important, but thinking about one’s response to those descriptions is equally so. Without reflection, it is possible that important aspects never even make it into the fieldnotes because they seem “unremarkable.”

The Data of Observational Work: Fieldnotes

In interview data collection, recordings of interviews are transcribed into the data of the study. This is not possible for much PO work because (1) much of what is observed (actions, settings, interactions) cannot be captured in a recording and (2) conversations that take place on-site are not easily recorded. Instead, the participant observer takes notes, either during the fieldwork or at the day’s end. These notes, called “fieldnotes,” are then the primary form of data for PO work.

Writing fieldnotes takes a lot of time. Because fieldnotes are your primary form of data, you cannot be stingy with the time it takes. Most practitioners suggest it takes at least the same amount of time to write up notes as it takes to be in the field, and many suggest it takes double the time. If you spend three hours at a meeting of the organization you are observing, it is a good idea to set aside five to six hours to write out your fieldnotes. Different researchers use different strategies about how and when to do this. Somewhat obviously, the earlier you can write down your notes, the more likely they are to be accurate. Writing them down at the end of the day is thus the default practice. However, if you are plainly exhausted, spending several hours trying to recall important details may be counterproductive. Writing fieldnotes the next morning, when you are refreshed and alert, may work better.

Researcher Note

How do you take fieldnotes? Any advice for those wanting to conduct an ethnographic study?

Fieldnotes are so important, especially for qualitative researchers. A little advice when considering how you approach fieldnotes: Record as much as possible! Sometimes I write down fieldnotes, and I often audio-record them as well to transcribe later. Sometimes the space to speak what I observed is helpful and allows me to be able to go a little more in-depth or to talk out something that I might not quite have the words for just yet. Within my fieldnotes, I include feelings and think about the following questions: How do I feel before data collection? How did I feel when I was engaging/watching? How do I feel after data collection? What was going on for me before this particular data collection? What did I notice about how folks were engaging? How were participants feeling, and how do I know this? Is there anything that seems different than other data collections? What might be going on in the world that might be impacting the participants? As a qualitative researcher, it’s also important to remember our own influences on the research—our feelings or current world news may impact how we observe or what we might capture in fieldnotes.

—Kim McAloney, PhD, College Student Services Administration Ecampus coordinator and instructor

What should be included in those fieldnotes? The obvious answer is “everything you observed and heard relevant to your research question.” The difficulty is that you often don’t know what is relevant to your research question when you begin, as your research question itself can develop and transform during the course of your observations. For example, let us say you begin a study of second-grade classrooms with the idea that you will observe gender dynamics between teachers and students as well as among the students themselves. But after five weeks of observation, you realize you are taking a lot of notes about how teachers validate certain attention-seeking behaviors among some students while ignoring those of others. For example, when Daisy (White female) interrupts a discussion on frogs to tell everyone she has a frog named Ribbit, the teacher smiles and asks her to tell the students what Ribbit is like. In contrast, when Solomon (Black male) interrupts a discussion on the planets to tell everyone his big brother is called Jupiter by their stepfather, the teacher frowns and shushes him. These notes spark interest in how teachers favor and develop some students over others and the role of gender, race, and class in these teacher practices. You then begin to be much more careful in recording these observations, and you are a little less attentive to the gender dynamics among students. But note that had you not been fairly thorough in the first place, you might never have arrived at these crucial insights about teacher favoritism.

Here are some suggestions for things to include in your fieldnotes as you begin: (1) descriptions of the physical setting; (2) people in the site: who they are and how they interact with one another (what roles they are taking on); and (3) things overheard: conversations, exchanges, questions. While you should develop your own personal system for organizing these fieldnotes (computer vs. printed journal, for example), at a minimum, each set of fieldnotes should include the date, time in the field, persons observed, and location specifics. You might also add keywords to each set so that you can search by names of participants, dates, and locations. Lareau ( 2021:167 ) recommends covering the following key issues, which mnemonically spell out WRITE— W : who, what, when, where, how; R: reaction (responses to the action in question and the response to the response); I: inaction (silence or nonverbal response to an action); T: timing (how slowly or quickly someone is speaking); and E: emotions (nonverbal signs of emotion and/or stoicism).
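For researchers who keep their fieldnotes electronically, the metadata and keyword suggestions above can be sketched as a small data structure. This is purely a hypothetical illustration (the entry fields, tags, and example notes are invented, not a prescribed format):

```python
from dataclasses import dataclass

# Hypothetical sketch of electronically organized fieldnotes, following
# the minimum metadata suggested above: date, time in the field, persons
# observed, and location, plus researcher-assigned keywords for searching.

@dataclass
class FieldnoteEntry:
    date: str              # e.g., "2024-04-12"
    hours_in_field: float  # time spent observing that day
    location: str
    persons: list          # pseudonyms of those observed
    keywords: list         # tags added by the researcher
    text: str              # the descriptive notes themselves

def search(entries, term):
    """Return every entry tagged with, or naming, the given term."""
    return [e for e in entries if term in e.keywords or term in e.persons]

notes = [
    FieldnoteEntry("2024-04-12", 3.0, "Room 6", ["Daisy", "Solomon"],
                   ["teacher-attention", "gender"],
                   "Daisy interrupts the frog discussion; the teacher smiles..."),
    FieldnoteEntry("2024-04-19", 2.5, "Room 6", ["Solomon"],
                   ["teacher-attention", "race"],
                   "Solomon interrupts the planets discussion; the teacher shushes..."),
]

print(len(search(notes, "teacher-attention")))  # both entries carry this tag
```

Searching by participant works the same way: `search(notes, "Daisy")` returns only the first entry. The same tagging idea works just as well in a paper journal with a hand-kept index.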

In addition to the observational fieldnotes, if you have time, it is a good practice to write reflective memos in which you ask yourself what you have learned (either about the study or about your abilities in the field). If you don’t have time to do this for every set of fieldnotes, at least get in the practice of memoing at certain key junctures, perhaps after reading through a certain number of fieldnotes (e.g., every third day of fieldnotes, you set aside two hours to read through the notes and memo). These memos can then be appended to relevant fieldnotes. You will be grateful for them when it comes time to analyze your data, as they are a preliminary by-the-seat-of-your-pants analysis. They also help steer you toward the study you want to pursue rather than allow you to wallow in unfocused data.

Ethics of Fieldwork

Because most fieldwork requires multiple and intense interactions (even if merely observational) with real living people as they go about their business, there are potentially more ethical choices to be made. In addition to the ethics of gaining entry and permission discussed above, there are issues of accurate representation, of respecting privacy, of adequate financial compensation, and sometimes of financial and other forms of assistance (when observing/interacting with low-income persons or other marginalized populations). In other words, the ethical decision-making of fieldwork is never concluded by obtaining a signature on a consent form. Read this brief selection from Pascale’s ( 2021 ) methods description (observation plus interviews) to see how many ethical decisions she made:

Throughout I kept detailed ethnographic field and interview records, which included written notes, recorded notes, and photographs. I asked everyone who was willing to sit for a formal interview to speak only for themselves and offered each of them a prepaid Visa Card worth $25–40. I also offered everyone the opportunity to keep the card and erase the tape completely at any time they were dissatisfied with the interview in any way. No one asked for the tape to be erased; rather, people remarked on the interview being a really good experience because they felt heard. Each interview was professionally transcribed and for the most part the excerpts in this book are literal transcriptions. In a few places, the excerpts have been edited to reduce colloquial features of speech (e.g., you know, like, um) and some recursive elements common to spoken language. A few excerpts were placed into standard English for clarity. I made this choice for the benefit of readers who might otherwise find the insights and ideas harder to parse in the original. However, I have to acknowledge this as an act of class-based violence. I tried to keep the original phrasing whenever possible. ( 235 )

Summary Checklist for Successful Participant Observation

The following are ten suggestions for being successful in the field, slightly paraphrased from Patton ( 2002:331 ). Here, I take those ten suggestions and turn them into an extended “checklist” to use when designing and conducting fieldwork.

  • Consider all possible approaches to your field and your position relative to that field (see figure 13.2). Choose wisely and purposely. If you have access to a particular site or are part of a particular culture, consider the advantages (and disadvantages) of pursuing research in that area. Clarify the amount of disclosure you are willing to share with those you are observing, and justify that decision.
  • Take thorough and descriptive field notes. Consider how you will record them. Where your research is located will affect what kinds of field notes you can take and when, but do not fail to write them! Commit to a regular recording time. Your field notes will probably be the primary data source you collect, so your study’s success will depend on thick descriptions and analytical memos you write to yourself about what you are observing.
  • Permit yourself to be flexible. Consider alternative lines of inquiry as you proceed. You might enter the field expecting to find something only to have your attention grabbed by something else entirely. This is perfectly fine (and, in some traditions, absolutely crucial for excellent results). When you do see your attention shift to an emerging new focus, take a step back, look at your original research design, and make careful decisions about what might need revising to adapt to these new circumstances.
  • Include triangulated data as a means of checking your observations. If you are that ICU nurse watching patient/doctor interactions, you might want to add a few interviews with patients to verify your interpretation of the interaction. Or perhaps pull some public data on the number of arrests for jaywalking if you are the student accompanying police on their rounds to find out if the large number of arrests you witnessed was typical.
  • Respect the people you are witnessing and recording, and allow them to speak for themselves whenever possible. Using direct quotes (recorded in your field notes or as supplementary recorded interviews) is another way to check the validity of the analyses of your observations. When designing your research, think about how you can ensure the voices of those you are interested in get included.
  •  Choose your informants wisely. Who are they relative to the field you are exploring? What are the limitations (ethical and strategic) in using those particular informants, guides, and gatekeepers? Limit your reliance on them to the extent possible.
  • Consider all the stages of fieldwork, and have appropriate plans for each. Recognize that different talents are required at different stages of the data-collection process. In the beginning, you will probably spend a great deal of time building trust and rapport and will have less time to focus on what is actually occurring. That’s normal. Later, however, you will want to be more focused on and disciplined in collecting data while also still attending to maintaining relationships necessary for your study’s success. Sometimes, especially when you have been invited to the site, those granting access to you will ask for feedback. Be strategic about when giving that feedback is appropriate. Consider how to extricate yourself from the site and the participants when your study is coming to an end. Have an ethical exit plan.
  • Allow yourself to be immersed in the scene you are observing. This is true even if you are observing a site as an outsider just one time. Make an effort to see things through the eyes of the participants while at the same time maintaining an analytical stance. This is a tricky balance to strike, of course, and is more of an art than a science. Practice it. Read about how others have achieved it.
  • Create a practice of separating your descriptive notes from your analytical observations. This may be as clear as dividing a sheet of paper into two columns, one for description only and the other for questions or interpretation (as we saw in chapter 11 on interviewing), or it may mean separating out the time you dedicate to descriptions from the time you reread and think deeply about those detailed descriptions. However you decide to do it, recognize that these are two separate activities, both of which are essential to your study’s success.
  • As always with qualitative research, be reflective and reflexive. Do not forget how your own experience and social location may affect both your interpretation of what you observe and the very things you observe themselves (e.g., where a patient says more forgiving things about an observably rude doctor because they read you, a nursing student, as likely to report any negative comments back to the doctor). Keep a research journal!

Further Readings

Emerson, Robert M., Rachel I. Fretz, and Linda L. Shaw. 2011. Writing Ethnographic Fieldnotes . 2nd ed. University of Chicago Press. Excellent guide that uses actual unfinished fieldnotes to illustrate various options for composing, reviewing, and incorporating fieldnotes into publications.

Lareau, Annette. 2021. Listening to People: A Practical Guide to Interviewing, Participant Observation, Data Analysis, and Writing It All Up . Chicago: University of Chicago Press. Includes actual fieldnotes from various studies with a really helpful accompanying discussion about how to improve them!

Wolfinger, Nicholas H. 2002. “On Writing Fieldnotes: Collection Strategies and Background Expectancies.” Qualitative Research 2(1):85–95. Uses fieldnotes from various sources to show how the researcher’s expectations and preexisting knowledge affect what gets written about; offers strategies for taking useful fieldnotes.

  • Note that leaving one’s office to interview someone in a coffee shop would not be considered fieldwork because the coffee shop is not an element of the study. If one sat down in a coffee shop and recorded observations, then this would be fieldwork. ↵
  • This is one reason why I have chosen to discuss deep ethnography in a separate chapter (chapter 14). ↵
  • This person is sometimes referred to as the informant (and more on these characters in chapter 14). ↵

Methodological tradition of inquiry that holds the view that all social interaction is dependent on shared views of the world and each other, characterized through people’s use of language and non-verbal communication.   Through interactions, society comes to be.  The goal of the researcher in this tradition is to trace that construction, as in the case of documenting how gender is “done” or performed, demonstrating the fluidity of the concept (and how it is constantly being made and remade through daily interactions).

Used primarily in ethnography, where the goal of fieldnotes is to produce a thick description of both what is observed directly (actions, actors, setting, etc.) and the meanings and interpretations being made by those actors at the time. In this way, the observed cultural and social relationships are contextualized for future interpretation. The opposite of a thick description is a thin description, in which observations are recorded without any social context or cues to help explain them. The term was coined by anthropologist Clifford Geertz (see chapter 14 ).

Reflective summaries of findings that emerge during analysis of qualitative data; they can include reminders to oneself for future analyses or considerations, reinterpretations or generations of codes, or brainstorms and concept mapping.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

6.5 Observational Research

Learning Objectives

  • List the various types of observational research methods and distinguish between them
  • Describe the strengths and weaknesses of each observational research method.

What Is Observational Research?

The term observational research is used to refer to several different types of non-experimental studies in which behavior is systematically observed and recorded. The goal of observational research is to describe a variable or set of variables. More generally, the goal is to obtain a snapshot of specific characteristics of an individual, group, or setting. As described previously, observational research is non-experimental because nothing is manipulated or controlled, and as such we cannot arrive at causal conclusions using this approach. The data that are collected in observational research studies are often qualitative in nature but they may also be quantitative or both (mixed-methods). There are several different types of observational research designs that will be described below.

Naturalistic Observation

Naturalistic observation is an observational method that involves observing people’s behavior in the environment in which it typically occurs. Thus, naturalistic observation is a type of field research (as opposed to a type of laboratory research). Jane Goodall’s famous research on chimpanzees is a classic example of naturalistic observation. Dr. Goodall spent three decades observing chimpanzees in their natural environment in East Africa. She examined such things as chimpanzees’ social structure, mating patterns, gender roles, family structure, and care of offspring by observing them in the wild. However, naturalistic observation could more simply involve observing shoppers in a grocery store, children on a school playground, or psychiatric inpatients in their wards. Researchers engaged in naturalistic observation usually make their observations as unobtrusively as possible so that participants are not aware that they are being studied. Such an approach is called disguised naturalistic observation. Ethically, this method is considered to be acceptable if the participants remain anonymous and the behavior occurs in a public setting where people would not normally have an expectation of privacy. Grocery shoppers putting items into their shopping carts, for example, are engaged in public behavior that is easily observable by store employees and other shoppers. For this reason, most researchers would consider it ethically acceptable to observe them for a study. On the other hand, one of the arguments against the ethicality of the naturalistic observation of “bathroom behavior” discussed earlier in the book is that people have a reasonable expectation of privacy even in a public restroom and that this expectation was violated.

In cases where it is not ethical or practical to conduct disguised naturalistic observation, researchers can conduct undisguised naturalistic observation, where the participants are made aware of the researcher’s presence and monitoring of their behavior. However, one concern with undisguised naturalistic observation is reactivity. Reactivity refers to when a measure changes participants’ behavior. In the case of undisguised naturalistic observation, the concern with reactivity is that when people know they are being observed and studied, they may act differently than they normally would. For instance, you may act much differently in a bar if you know that someone is observing you and recording your behaviors, and this would invalidate the study. So disguised observation is less reactive and therefore can have higher validity because people are not aware that their behaviors are being observed and recorded. However, we now know that people often become used to being observed and with time they begin to behave naturally in the researcher’s presence. In other words, over time people habituate to being observed. Think about reality shows like Big Brother or Survivor where people are constantly being observed and recorded. While they may be on their best behavior at first, in a fairly short amount of time they are flirting, having sex, wearing next to nothing, screaming at each other, and at times acting like complete fools in front of the entire nation.

Participant Observation

Another approach to data collection in observational research is participant observation. In participant observation, researchers become active participants in the group or situation they are studying. Participant observation is very similar to naturalistic observation in that it involves observing people’s behavior in the environment in which it typically occurs. As with naturalistic observation, the data that are collected can include interviews (usually unstructured), notes based on observations and interactions, documents, photographs, and other artifacts. The only difference between naturalistic observation and participant observation is that researchers engaged in participant observation become active members of the group or situations they are studying. The basic rationale for participant observation is that there may be important information that is only accessible to, or can be interpreted only by, someone who is an active participant in the group or situation. Like naturalistic observation, participant observation can be either disguised or undisguised. In disguised participant observation, the researchers pretend to be members of the social group they are observing and conceal their true identity as researchers. In contrast, in undisguised participant observation, the researchers become a part of the group they are studying and disclose their true identity as researchers to the group under investigation. Once again, there are important ethical issues to consider with disguised participant observation. First, no informed consent can be obtained, and second, passive deception is being used. The researcher is passively deceiving the participants by intentionally withholding information about their motivations for being a part of the social group they are studying. But sometimes disguised participation is the only way to gain access to a protective group (like a cult). Further, disguised participant observation is less prone to reactivity than undisguised participant observation.

Rosenhan’s study (1973) [1]   of the experience of people in a psychiatric ward would be considered disguised participant observation because Rosenhan and his pseudopatients were admitted into psychiatric hospitals on the pretense of being patients so that they could observe the way that psychiatric patients are treated by staff. The staff and other patients were unaware of their true identities as researchers.

Another example of participant observation comes from a study by sociologist Amy Wilkins (published in  Social Psychology Quarterly ) on a university-based religious organization that emphasized how happy its members were (Wilkins, 2008) [2] . Wilkins spent 12 months attending and participating in the group’s meetings and social events, and she interviewed several group members. In her study, Wilkins identified several ways in which the group “enforced” happiness—for example, by continually talking about happiness, discouraging the expression of negative emotions, and using happiness as a way to distinguish themselves from other groups.

One of the primary benefits of participant observation is that the researcher is in a much better position to understand the viewpoint and experiences of the people they are studying when they are a part of the social group. The primary limitation of this approach is that the mere presence of the observer could affect the behavior of the people being observed. While this is also a concern with naturalistic observation, when researchers become active members of the social group they are studying additional concerns arise: they may change the social dynamics and/or influence the behavior of the people they are studying. Similarly, if the researcher acts as a participant observer, there can be concerns about biases resulting from developing relationships with the participants. Concretely, the researcher may become less objective, resulting in more experimenter bias.

Structured Observation

Another observational method is structured observation. Here the investigator makes careful observations of one or more specific behaviors in a particular setting that is more structured than the settings used in naturalistic and participant observation. Often the setting in which the observations are made is not the natural setting; rather, the researcher may observe people in the laboratory environment. Alternatively, the researcher may observe people in a natural setting (like a classroom setting) that they have structured in some way, for instance by introducing some specific task participants are to engage in or by introducing a specific social situation or manipulation. Structured observation is very similar to naturalistic observation and participant observation in that in all cases researchers are observing naturally occurring behavior; however, the emphasis in structured observation is on gathering quantitative rather than qualitative data. Researchers using this approach are interested in a limited set of behaviors. This allows them to quantify the behaviors they are observing. In other words, structured observation is less global than naturalistic and participant observation because the researcher engaged in structured observation is interested in a small number of specific behaviors. Therefore, rather than recording everything that happens, the researcher only focuses on very specific behaviors of interest.


Researchers Robert Levine and Ara Norenzayan used structured observation to study differences in the “pace of life” across countries (Levine & Norenzayan, 1999) [3] . One of their measures involved observing pedestrians in a large city to see how long it took them to walk 60 feet. They found that people in some countries walked reliably faster than people in other countries. For example, people in Canada and Sweden covered 60 feet in just under 13 seconds on average, while people in Brazil and Romania took close to 17 seconds. When structured observation  takes place in the complex and even chaotic “real world,” the questions of when, where, and under what conditions the observations will be made, and who exactly will be observed are important to consider. Levine and Norenzayan described their sampling process as follows:

“Male and female walking speed over a distance of 60 feet was measured in at least two locations in main downtown areas in each city. Measurements were taken during main business hours on clear summer days. All locations were flat, unobstructed, had broad sidewalks, and were sufficiently uncrowded to allow pedestrians to move at potentially maximum speeds. To control for the effects of socializing, only pedestrians walking alone were used. Children, individuals with obvious physical handicaps, and window-shoppers were not timed. Thirty-five men and 35 women were timed in most cities.” (p. 186)

Precise specification of the sampling process in this way makes data collection manageable for the observers, and it also provides some control over important extraneous variables. For example, by making their observations on clear summer days in all countries, Levine and Norenzayan controlled for effects of the weather on people’s walking speeds. In Levine and Norenzayan’s study, measurement was relatively straightforward: they simply measured out a 60-foot distance along a city sidewalk and then used a stopwatch to time participants as they walked over that distance.
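The rounded times quoted above translate directly into average speeds (distance divided by time). A quick sketch using those approximate figures, not the exact published data:

```python
# Convert approximate 60-foot walking times into average speeds.
# The grouped times below are the rounded figures quoted in the text,
# not the study's exact per-city values.
DISTANCE_FT = 60

times_sec = {"Canada/Sweden (approx.)": 12.9, "Brazil/Romania (approx.)": 17.0}

for place, t in times_sec.items():
    speed_fps = DISTANCE_FT / t          # feet per second
    speed_mph = speed_fps * 3600 / 5280  # ft/s -> miles per hour
    print(f"{place}: {speed_fps:.2f} ft/s ({speed_mph:.2f} mph)")
```

The roughly four-second gap over 60 feet amounts to a difference of close to one mile per hour in average walking pace, which gives a feel for the size of the cross-country effect.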

As another example, researchers Robert Kraut and Robert Johnston wanted to study bowlers’ reactions to their shots, both when they were facing the pins and then when they turned toward their companions (Kraut & Johnston, 1979) [4] . But what “reactions” should they observe? Based on previous research and their own pilot testing, Kraut and Johnston created a list of reactions that included “closed smile,” “open smile,” “laugh,” “neutral face,” “look down,” “look away,” and “face cover” (covering one’s face with one’s hands). The observers committed this list to memory and then practiced by coding the reactions of bowlers who had been videotaped. During the actual study, the observers spoke into an audio recorder, describing the reactions they observed. Among the most interesting results of this study was that bowlers rarely smiled while they still faced the pins. They were much more likely to smile after they turned toward their companions, suggesting that smiling is not purely an expression of happiness but also a form of social communication.

When the observations require a judgment on the part of the observers—as in Kraut and Johnston’s study—this process is often described as coding. Coding generally requires clearly defining a set of target behaviors. The observers then categorize participants individually in terms of which behavior they have engaged in and the number of times they engaged in each behavior. The observers might even record the duration of each behavior. The target behaviors must be defined in such a way that different observers code them in the same way. This challenge of coding raises the issue of interrater reliability, as mentioned in Chapter 4. Researchers are expected to demonstrate the interrater reliability of their coding procedure by having multiple raters code the same behaviors independently and then showing that the different observers are in close agreement. Kraut and Johnston, for example, video recorded a subset of their participants’ reactions and had two observers independently code them. The two observers showed that they agreed on the reactions that were exhibited 97% of the time, indicating good interrater reliability.
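The agreement check described here is simple to compute: have two observers code the same observations and count the matches. A minimal sketch with invented codes (these are illustrative, not Kraut and Johnston's actual data):

```python
# Two observers independently code the same five bowler reactions.
# The codes below are invented for illustration only.
coder_a = ["open smile", "neutral face", "look down", "laugh", "open smile"]
coder_b = ["open smile", "neutral face", "look away", "laugh", "open smile"]

def percent_agreement(a, b):
    """Share of observations that the two coders categorized identically."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

print(f"{percent_agreement(coder_a, coder_b):.0%}")  # 4 of 5 codes match
```

Note that simple percent agreement can overstate reliability when one code dominates; chance-corrected statistics such as Cohen’s kappa are often reported for that reason.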

One of the primary benefits of structured observation is that it is far more efficient than naturalistic and participant observation. Since the researchers are focused on specific behaviors, this reduces time and expense. Also, oftentimes the environment is structured to encourage the behaviors of interest, which again means that researchers do not have to invest as much time in waiting for the behaviors of interest to naturally occur. Finally, researchers using this approach can clearly exert greater control over the environment. However, when researchers exert more control over the environment, it may make the environment less natural, which decreases external validity. It is less clear, for instance, whether structured observations made in a laboratory environment will generalize to a real-world environment. Furthermore, since researchers engaged in structured observation are often not disguised, there may be more concerns with reactivity.

Case Studies

A  case study  is an in-depth examination of an individual. Sometimes case studies are also completed on social units (e.g., a cult) and events (e.g., a natural disaster). Most commonly in psychology, however, case studies provide a detailed description and analysis of an individual. Often the individual has a rare or unusual condition or disorder or has damage to a specific region of the brain.

Like many observational research methods, case studies tend to be qualitative in nature. Case study methods involve an in-depth, and often longitudinal, examination of an individual. Depending on the focus of the case study, the individual may or may not be observed in their natural setting; if the natural setting is not what is of interest, the individual may be brought into a therapist's office or a researcher's lab for study. The bulk of a case study report focuses on in-depth description of the person rather than on statistical analyses. That said, some quantitative data may also be included in the write-up: for instance, an individual's depression score may be compared to normative scores, or their scores before and after treatment may be compared. As with other qualitative methods, a variety of methods and tools can be used to collect information on the case, including interviews, naturalistic observation, structured observation, psychological testing (e.g., an IQ test), and physiological measurements (e.g., brain scans).

HM is one of the most famous case studies in psychology. HM suffered from intractable and very severe epilepsy. A surgeon localized HM's epilepsy to his medial temporal lobe and in 1953 removed large sections of his hippocampus in an attempt to stop the seizures. The treatment was a success in that it resolved his epilepsy, and his IQ and personality were unaffected. However, the doctors soon realized that HM exhibited a strange form of amnesia, called anterograde amnesia. HM was able to carry out a conversation and could remember short strings of letters, digits, and words; his short-term memory was preserved. However, he could not commit new events to memory: he had lost the ability to transfer information from his short-term memory to his long-term memory, something memory researchers call consolidation. So while he could carry on a conversation with someone, he would completely forget the conversation after it ended. This was an extremely important case study for memory researchers because it suggested a dissociation between short-term and long-term memory: these appeared to be two different abilities sub-served by different areas of the brain. It also suggested that the temporal lobes are particularly important for consolidating new information (i.e., for transferring information from short-term memory to long-term memory).

www.youtube.com/watch?v=KkaXNvzE4pk

The history of psychology is filled with influential case studies, such as Sigmund Freud's description of "Anna O." (see Note 6.1 "The Case of 'Anna O.'") and John Watson and Rosalie Rayner's description of Little Albert (Watson & Rayner, 1920) [5], who learned to fear a white rat, along with other furry objects, when the researchers made a loud noise while he was playing with the rat.

The Case of “Anna O.”

Sigmund Freud used the case of a young woman he called “Anna O.” to illustrate many principles of his theory of psychoanalysis (Freud, 1961) [6] . (Her real name was Bertha Pappenheim, and she was an early feminist who went on to make important contributions to the field of social work.) Anna had come to Freud’s colleague Josef Breuer around 1880 with a variety of odd physical and psychological symptoms. One of them was that for several weeks she was unable to drink any fluids. According to Freud,

She would take up the glass of water that she longed for, but as soon as it touched her lips she would push it away like someone suffering from hydrophobia.…She lived only on fruit, such as melons, etc., so as to lessen her tormenting thirst. (p. 9)

But according to Freud, a breakthrough came one day while Anna was under hypnosis.

[S]he grumbled about her English “lady-companion,” whom she did not care for, and went on to describe, with every sign of disgust, how she had once gone into this lady’s room and how her little dog—horrid creature!—had drunk out of a glass there. The patient had said nothing, as she had wanted to be polite. After giving further energetic expression to the anger she had held back, she asked for something to drink, drank a large quantity of water without any difficulty, and awoke from her hypnosis with the glass at her lips; and thereupon the disturbance vanished, never to return. (p.9)

Freud’s interpretation was that Anna had repressed the memory of this incident along with the emotion that it triggered and that this was what had caused her inability to drink. Furthermore, her recollection of the incident, along with her expression of the emotion she had repressed, caused the symptom to go away.

As an illustration of Freud’s theory, the case study of Anna O. is quite effective. As evidence for the theory, however, it is essentially worthless. The description provides no way of knowing whether Anna had really repressed the memory of the dog drinking from the glass, whether this repression had caused her inability to drink, or whether recalling this “trauma” relieved the symptom. It is also unclear from this case study how typical or atypical Anna’s experience was.

Figure 10.1 Anna O. “Anna O.” was the subject of a famous case study used by Freud to illustrate the principles of psychoanalysis. Source: http://en.wikipedia.org/wiki/File:Pappenheim_1882.jpg

Case studies are useful because they provide a level of detailed analysis not found in many other research methods, and greater insights may be gained from this detail. As a result of a case study, the researcher may gain a sharpened sense of what would be important to examine more extensively in future, more controlled research. Case studies are also often the only way to study rare conditions, because it may be impossible to find a large enough sample of individuals with the condition to use quantitative methods. Although at first glance a case study of a rare individual might seem to tell us little about ourselves, such cases often provide insights into normal functioning; the case of HM, for example, yielded important insights into the role of the hippocampus in memory consolidation. It is important to note, however, that while case studies can suggest areas and variables to study and can be useful in developing theories, they should never be used as evidence for theories. In other words, case studies can inspire theories and hypotheses, but those hypotheses and theories must then be formally tested using more rigorous quantitative methods.

The reason case studies should not be used to support theories is that they suffer from problems with both internal and external validity. Case studies lack the controls that true experiments contain, so they cannot be used to determine causation. For instance, during HM's surgery, the surgeon may have accidentally lesioned another area of HM's brain (indeed, questions about a possible separate lesion arose after HM's death and the dissection of his brain), and that lesion may have contributed to his inability to consolidate new information. With case studies, we cannot rule out such alternative explanations; as with all observational methods, they do not permit determination of causation. In addition, because case studies examine a single, often highly atypical individual, researchers cannot generalize their conclusions to other people. Recall that most research designs involve a trade-off between internal and external validity; case studies, however, have problems with both, so there are limits both to determining causation and to generalizing the results. A final limitation of case studies is that they leave ample opportunity for the researcher's theoretical biases to color the case description. Indeed, the researcher who studied HM has been accused of destroying unpublished data that contradicted her theory of how memories are consolidated. A fascinating New York Times article describes some of the controversies that ensued after HM's death and the analysis of his brain: https://www.nytimes.com/2016/08/07/magazine/the-brain-that-couldnt-remember.html?_r=0

Archival Research

Another approach that is often considered observational research is archival research, which involves analyzing data that have already been collected for some other purpose. An example is a study by Brett Pelham and his colleagues on "implicit egotism," the tendency for people to prefer people, places, and things that are similar to themselves (Pelham, Carvallo, & Jones, 2005) [7]. In one study, they examined Social Security records to show that women with the names Virginia, Georgia, Louise, and Florence were especially likely to have moved to the states of Virginia, Georgia, Louisiana, and Florida, respectively.

As with naturalistic observation, measurement can be more or less straightforward when working with archival data. For example, counting the number of people named Virginia who live in various states based on Social Security records is relatively straightforward. But consider a study by Christopher Peterson and his colleagues on the relationship between optimism and health using data that had been collected many years before for a study on adult development (Peterson, Seligman, & Vaillant, 1988) [8]. In the 1940s, healthy male college students had completed an open-ended questionnaire about difficult wartime experiences. In the late 1980s, Peterson and his colleagues reviewed the men's questionnaire responses to obtain a measure of explanatory style, their habitual ways of explaining bad events that happen to them. More pessimistic people tend to blame themselves and expect long-term negative consequences that affect many aspects of their lives, while more optimistic people tend to blame outside forces and expect limited negative consequences. To obtain a measure of explanatory style for each participant, the researchers used a procedure in which all negative events mentioned in the questionnaire responses, along with any causal explanations for them, were identified and written on index cards. These were given to a separate group of raters who rated each explanation on three separate dimensions of optimism-pessimism. These ratings were then averaged to produce an explanatory style score for each participant. The researchers then assessed the statistical relationship between the men's explanatory style as undergraduate students and archival measures of their health at approximately 60 years of age. The primary result was that the more optimistic the men were as undergraduate students, the healthier they were as older men; Pearson's r was +.25.
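The rating-and-averaging procedure can be sketched as follows. Every number here is invented for illustration and does not reproduce the study's data; the point is simply how per-event ratings become one score per participant, which is then correlated with a second measure:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: each negative event was rated by three raters on a 1-7
# optimism scale; ratings are averaged into one explanatory-style score.
ratings = {
    "p1": [[2, 3, 2], [3, 3, 4]],             # two events, three raters each
    "p2": [[6, 5, 6]],                        # one event
    "p3": [[4, 4, 5], [5, 6, 5], [4, 5, 4]],  # three events
}
style = {p: mean(mean(event) for event in events) for p, events in ratings.items()}

# Invented later-life health scores (higher = healthier)
health = {"p1": 3.1, "p2": 6.0, "p3": 4.4}

r = pearson_r([style[p] for p in ratings], [health[p] for p in ratings])
print(round(r, 2))  # 0.97
```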

This method is an example of content analysis, a family of systematic approaches to measurement using complex archival data. Just as structured observation requires specifying the behaviors of interest and then noting them as they occur, content analysis requires specifying keywords, phrases, or ideas and then finding all occurrences of them in the data. These occurrences can then be counted, timed (e.g., the amount of time devoted to entertainment topics on the nightly news show), or analyzed in a variety of other ways.
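A minimal keyword-counting sketch of this idea; the sample texts and keyword list are invented for illustration:

```python
import re
from collections import Counter

def count_keywords(documents, keywords):
    """Count whole-word occurrences of each keyword across a set of texts."""
    counts = Counter()
    for doc in documents:
        for word in re.findall(r"[a-z']+", doc.lower()):
            if word in keywords:
                counts[word] += 1
    return counts

# Invented sample responses and keyword list
docs = [
    "I blamed myself for the defeat and expected things to stay bad.",
    "It was bad luck; I expected it to blow over quickly.",
]
print(count_keywords(docs, {"blamed", "expected", "bad"}))
```

In practice, counts like these are only the starting point; coders must still judge context (e.g., whether "blamed" refers to the self or to outside forces).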

Key Takeaways

  • There are several different approaches to observational research, including naturalistic observation, participant observation, structured observation, case studies, and archival research.
  • Naturalistic observation is used to observe people in their natural settings, participant observation involves becoming an active member of the group being observed, structured observation involves coding a small number of behaviors in a quantitative manner, case studies are typically used to collect in-depth information on a single individual, and archival research involves analyzing existing data.

Exercises

  • Describe one problem related to internal validity.
  • Describe one problem related to external validity.
  • Generate one hypothesis suggested by the case study that might be interesting to test in a systematic single-subject or group study.

References

1. Rosenhan, D. L. (1973). On being sane in insane places. Science, 179, 250–258.
2. Wilkins, A. (2008). "Happier than Non-Christians": Collective emotions and symbolic boundaries among evangelical Christians. Social Psychology Quarterly, 71, 281–301.
3. Levine, R. V., & Norenzayan, A. (1999). The pace of life in 31 countries. Journal of Cross-Cultural Psychology, 30, 178–205.
4. Kraut, R. E., & Johnston, R. E. (1979). Social and emotional messages of smiling: An ethological approach. Journal of Personality and Social Psychology, 37, 1539–1553.
5. Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3, 1–14.
6. Freud, S. (1961). Five lectures on psycho-analysis. New York, NY: Norton.
7. Pelham, B. W., Carvallo, M., & Jones, J. T. (2005). Implicit egotism. Current Directions in Psychological Science, 14, 106–110.
8. Peterson, C., Seligman, M. E. P., & Vaillant, G. E. (1988). Pessimistic explanatory style is a risk factor for physical illness: A thirty-five year longitudinal study. Journal of Personality and Social Psychology, 55, 23–27.

Structured Qualitative Research: Organizing “Mountains of Words” for Data Analysis, both Qualitative and Quantitative

Qualitative research creates mountains of words. U.S. federal funding supports mostly structured qualitative research, which is designed to test hypotheses using semi-quantitative coding and analysis. The authors have 30 years of experience in designing and completing major qualitative research projects, mainly funded by the U.S. National Institute on Drug Abuse (NIDA). This article reports on strategies for planning, organizing, collecting, managing, storing, retrieving, analyzing, and writing about qualitative data so as to most efficiently manage the mountains of words collected in large-scale ethnographic projects. Multiple benefits accrue from this approach: several different staff members can contribute to data collection, even when working from remote locations; field expenditures are linked to units of work, so productivity can be measured; many staff in various locations can access and analyze the data; quantitative data can be derived from data that are primarily qualitative; and resources are used more efficiently. The major difficulties involve the need for staff who can program and manage large databases and who can skillfully analyze both qualitative and quantitative data.

The Problem

Qualitative research creates mountains of words. No matter how large or small the project, the qualitative methodology depends primarily upon eliciting self-reports from subjects or observations made in the field that are transcribed into field notes. Even a small qualitative project easily generates thousands of words; major ethnographic projects easily generate millions. Fortunately, recent advances in computer technology and software have made it possible to manage these mountains of words more efficiently, as described below. At every step of conducting research using qualitative methods, researchers and research teams face daunting problems of how to organize, collect, manage, store, retrieve, analyze, and give meaning to the information obtained. This article focuses upon the strategies and experiences of the authors, who have conducted a wide variety of research projects that are primarily qualitative in focus, although some have also included quantitative components. This focus reflects their experience in organizing large qualitative projects so that data flow routinely into a comprehensive database, making subsequent analysis as efficient as possible.

Quantitative research projects ask pre-coded questions and assign numeric values to responses; such pre-coded answers can be analyzed by the straightforward methods available in programs like SAS, SPSS, or Stata. By contrast, qualitative research is far less structured and cannot easily be converted into numbers for analysis by such statistical packages.

The field of anthropology retains a tradition of the lone investigator conducting fieldwork, often in a foreign country, collecting qualitative data that only they know and understand and must subsequently analyze for publication, but this model is now quite rare. During the past 25 years, the U.S. federal government, through the various institutes of the National Institutes of Health and the Department of Education, has provided increased funding to support qualitative researchers addressing numerous topics. Increasingly, researchers who rely primarily upon federal funding for qualitative research must grapple with review committees and funding decisions that insist qualitative researchers obtain and analyze data that build towards (or include) a quantitative component. This is because quantitative research constitutes the dominant methodological paradigm for most social science research supported by governments and foundations; moreover, most scientific theories and hypotheses are formulated to be answered by quantitative approaches. Review committees often want larger samples, inclusion of special populations, and higher levels of abstraction and theory testing than can be accomplished by smaller qualitative research projects. Even when fortunate enough to receive a federal grant to conduct qualitative research, the skilled qualitative researcher confronts many problems. Some urban ethnography is now conducted with multiple ethnographers and field workers; this involves training, coordinating, and structuring the work of those who will conduct the actual research, and systematically recording qualitative information that will often be analyzed by persons who did not collect it. We will call this structured qualitative research. By this we mean that the investigator proposes to study a rather specific topic and outlines in considerable detail the various dimensions or lines of inquiry that the project is designed to elicit.
While many qualitative researchers make substantial attempts to structure their research, so as to obtain rich responses from subjects and well-written field notes, the careful structuring of qualitative protocols, systematic use of databases for storage, retrieval, and management of the mountains of words collected, and other efficiencies described below represent important advances for qualitative researchers to consider.

An additional expectation from grant and peer reviewers is that the investigator will propose specific hypotheses and analyze the qualitative data using analytic approaches that we call semi-quantitative. This means that some segments of textual data are coded in a way that allows rough numerical counts or proportions to be reported along with typical quotes from qualitative respondents. That is, a small portion of the mountains of words is partially converted from textual data into assigned numbers or variables that can be counted and employed like quantitative data sets. The structured qualitative research reported here has generated a coding system that permits individual interview responses to be converted into numeric codes for subsequent quantitative analysis. This article reports on strategies for planning, organizing, collecting, managing, storing, retrieving, coding, analyzing, and writing so that qualitative methods can most efficiently manage the mountains of words collected in large-scale ethnographic projects. With this focus in mind, we report on strategies for handling textual data rather than on the many other important issues that arise in qualitative research. Other articles in this special issue address important topics that this article mentions only in passing; the authors have also addressed such issues in prior publications.
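The semi-quantitative step can be sketched as follows: coded text segments are tallied into counts and proportions, with a typical quote retained for each code. All respondent IDs, codes, and quotes below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical coded answers: (respondent id, assigned code, verbatim quote)
coded = [
    ("R01", "sold_before_storm", "I was selling right up until the evacuation."),
    ("R02", "sold_before_storm", "Business was normal that last month."),
    ("R03", "stopped_selling", "I quit when the water came."),
    ("R04", "sold_before_storm", "Same corners, same customers."),
]

by_code = defaultdict(list)
for rid, code, quote in coded:
    by_code[code].append(quote)

n = len(coded)
for code, quotes in by_code.items():
    print(f'{code}: {len(quotes)}/{n} ({len(quotes) / n:.0%}), e.g. "{quotes[0]}"')
```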

For example, this article sets aside issues associated with human subjects and informed consent 1 (Dunlap and Johnson 2005), sampling of respondents, safety in field settings (Dunlap and Johnson 1992; Williams et al. 1992), and many other related issues, although they may be mentioned in passing. Rather, the authors focus upon several key stages and procedures that support structured qualitative research for managing, eliciting, and analyzing mountains of words.

The authors have engaged in several major qualitative research projects, with a primary focus upon patterns of illicit drug use and distribution. All of these projects have been funded by the National Institute on Drug Abuse (NIDA). The senior author, Johnson, was trained as a quantitative researcher at Columbia University, but has been associated with several leading ethnographers and qualitative researchers during his career. The second author, Dunlap, was primarily trained as a qualitative researcher at University of California at Berkeley, while the third author, Benoit, has training in historical methods and qualitative research at New York University. All three have worked together on several different qualitative research projects. From 1988 to the present, Johnson and Dunlap have conducted qualitative studies on crack distribution and substance abuse, crack and crime, and several others involving quantitative studies of arrestees. Recently, Dunlap and Johnson have conducted qualitative studies of household violence, transient domesticity, marijuana and blunts, and most recently studies of drug markets in New Orleans and Houston following Hurricane Katrina. Each of these studies has involved very similar qualitative methods for framing research questions, developing qualitative protocols, organizing data, storing and managing textual data, and then analyzing the data. These projects have resulted in close to 100 different publications since 1990 in a wide variety of journals and books. Some of these are cited at different points in the analysis that follows.

We also recognize that many other ethnographers have conducted excellent qualitative studies, and arrived at different solutions and uses of software, and analyzed both qualitative and quantitative data, resulting in the publication of peer-reviewed articles. Such researchers may find our experience helpful at improving their efficiency in managing the Mountain of Words in their future research. Likewise, colleagues conducting research outside the USA and international settings may find these experiences informative.

A central conclusion, widely noted by many qualitative researchers, is that continuing advances in computer hardware (almost all computers in 2008 have extensive RAM and hard-drive storage capacity) (Fielding and Lee 1991), software (Bazeley 2002; MacMillan and Koenig 2004), internet access (Mann and Stewart 2000), and cellular communication technology have transformed the efficiency of managing mountains of words (Mangabeira, Lee and Fielding 2004). The strategies reported below are based on efficient use of some of this technology. Nevertheless, the authors struggle to keep up with and use these technological advances, a struggle that will continue for the foreseeable future. We address how researchers can most efficiently use such technological advances to conduct qualitative research on a wide range of topics.

In the authors' experience, most qualitative analysis programs, such as Ethnograph (Sidel and Friese 1998), Atlas (Muhr 2005), Nudist (Crowley, Harré, Tagg 2002), NVivo (Bazeley 2002), and others, have major limitations (see Barry 1998; Brent and Slusarz 2003; Gilbert et al. 2004; MacMillan and Koenig 2004) that have proven unsatisfactory for managing the mountains of words collected in the above-mentioned research projects (also see Manwar, Dunlap, Johnson 1993). The major shortcoming of these programs stems from their being word-based. While they are efficient at searching for individual words or organizing collections of words or text, they do so across the entire database, generating far too many "hits" to systematically retrieve targeted data useful for analysis. By contrast, structured qualitative research is organized around questions carefully framed by the investigator to systematically elicit answers from respondents. This approach helps qualitative researchers obtain, store, organize, and analyze data more efficiently and effectively.

Thus, one of the most fundamental and important decisions in conducting structured qualitative research, best made at the beginning of a project, is the choice of a software program that can integrate information across many different functions and purposes, allowing one program to hold most of the information and data collected during qualitative research. During the past nine years, the authors have found one relational database that provides this integrative function: FileMaker Pro (2007) (now in version 9), a true relational database that has been successful for managing qualitative data and serving as the major "storehouse" for answers to individual questions by ethnographic respondents. The product has continuously upgraded its capabilities. Moreover, staff experience has developed over the years; staff have learned to employ the program's capabilities and can now design, accomplish, and integrate many functions that were previously scattered across different programs or done by paper, pencil, and US mail, as described below.
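FileMaker Pro is proprietary, but the underlying relational idea, storing each answer as a row keyed by respondent and question ID so that retrieval can target a single question rather than searching the whole corpus, can be sketched with Python's built-in sqlite3 module. All IDs and answer texts below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE answers (
        respondent_id TEXT,
        question_id   TEXT,  -- e.g. 'B12' = baseline protocol, question 12
        answer_text   TEXT
    )""")
rows = [
    ("R01", "B12", "Before the storm I bought from the same dealer every week."),
    ("R01", "B13", "After evacuation I had no connection in Houston at first."),
    ("R02", "B12", "I mostly shared with friends before Katrina."),
]
conn.executemany("INSERT INTO answers VALUES (?, ?, ?)", rows)

# A targeted pull: every respondent's answer to one question, rather than
# a word search across the entire database
for rid, text in conn.execute(
        "SELECT respondent_id, answer_text FROM answers WHERE question_id = 'B12'"):
    print(rid, text)
```

The design choice mirrors the article's point: once answers are keyed to question IDs, analysis can pull comparable answers across all respondents in one query.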

We currently have three major databases (each is over 60 MB), each containing extensive amounts of qualitative data. We reference our experience and findings from some of these projects to illustrate some of the points made below. These three projects are briefly described here; citations are also provided to published articles that contain more details about the samples and qualitative methods employed.

  • Marijuana/blunts: This project was a five-year study (2002-2007) in New York City which recruited 100 current and active marijuana users who were followed longitudinally for three to four years; they were re-interviewed on several occasions following a structured qualitative protocol (similar to that described below). In addition, this project developed a quantitative protocol derived from insights gained during the qualitative research in year one. This peer-group questionnaire was developed in year two and administered during years two and three; 550 additional respondents completed this protocol (Ream et al. 2005, 2006, 2007, 2008). An additional sub-study recruited marijuana/blunts users who allowed their recent purchase of marijuana to be weighed so that price per gram could be calculated (Sifaneck et al. 2007).
  • Transient domesticity: This study was built upon 10 years of previous qualitative research in the 1990s. An entirely qualitative study (2003-2008), it investigates the role of (often transient) male partners in the households of poor African-American women with children. It documents whether and how violence occurs within the male-female relationship, and how drug use and sales activities affect household functioning. Ninety-two carefully selected focal subjects (plus partners and other household members) were recruited in years one and two. These subjects completed a baseline protocol, and most have been re-interviewed approximately every six months during the remaining three years.
  • Katrina project: This study (2006-2010) is in mid-data collection; this article is among the early publications from this project (also see Johnson, Dunlap, Morse 2007). More extensive data analysis is just beginning. This structured qualitative study investigates the reformulation of illicit drug markets among New Orleans evacuees in both New Orleans and Houston. The staff anticipates that over 150 active users and sellers of several illegal drugs will be recruited and interviewed in these two locations. Several focus groups are being conducted. This qualitative project faces the additional difficulty of being conducted at sites (New Orleans and Houston) far removed from the investigators' work location (New York). Most of the examples provided below are drawn from this project because of the important advances made in the effective use of the technology and software that make it innovative. We provide illustrative materials in boxes (labeled as Figures).

Planning structured qualitative research

The most fundamental requirement is to be clear about what the qualitative research project is designed to accomplish. This is especially important when writing a complex application to a federal agency for funding, and hoping that it will be funded after several reviews. The application must propose a specific focus for the research, delineate specific aims, indicate its significance, describe preliminary research, and provide detailed methods and analysis plans. This is where scientific innovation and clarity of purpose are especially important, as only a few applications receive good priority scores, and even fewer receive funding. The aims are provided in the abstract of the Katrina project in Figure 1 .

Figure 1. AIMS and Abstract of Katrina Project

Many qualitative research projects are organized around four types of data collection approaches that yield extensive amounts of textual data:

  • Field notes -- written observations of the field setting that record what is seen, heard, or observed.
  • Baseline qualitative protocol -- a carefully developed interview schedule that elicits stories and accounts from respondents. Interviews are usually recorded and transcribed, generating large numbers of words; when carefully recorded and transcribed, the questions and answers become the data elements.
  • Follow-up qualitative protocol – this protocol is very similar to the baseline, but often asks fewer questions, requests updates, and provides information needed to study change across time.
  • Focus group protocols -- a focus group usually involves three to 10 persons who are asked to address a limited set of questions on which they have the expertise to provide illuminating information.

In our experience, the more carefully structured each of these types of data collection, the better the quality of the data elicited. We consider each of them in some detail below.

Depending upon the aims of the project, many other issues may need further elaboration. In the Katrina project, this involved developing question domains for the time windows and illicit drug markets being studied. The baseline protocol described below includes questions about the month before Katrina, the week of Hurricane Katrina and the flooding of New Orleans, the month after evacuation from New Orleans, and the present (the past 30 days) at the time of the interview. The 30-day window was chosen as a common time period that persons could recall with some accuracy; in fact, many respondents reported their recollections without reference to a 30-day window. Likewise, other questions focus upon a respondent's participation as a user and/or seller in various illegal markets: cannabis, heroin, crack/cocaine, and other illicit drugs. Many other details are fleshed out in the research methods section of the application and in the development of protocols and data collection devices, as well as in prior publications (Davis et al 2006; Lewis et al 1992). These questions need to be asked, and their answers recorded, in a highly structured fashion in order to obtain roughly comparable data from a wide variety of participating research subjects.

Training and supervising ethnographic staff

No matter how well conceptualized the qualitative research project is, actual implementation is fraught with difficulties. A typical problem involves hiring qualified staff. The reality is that very few persons have all the requisite qualitative skills for studying illegal drug markets in the field. Investigators must often choose between two types of staff. Persons with good educational credentials (e.g., masters- or doctoral-level training in qualitative research, often in anthropology or sociology) frequently lack street contacts and connections with illegal drug users at research sites. By contrast, "street savvy" persons may have excellent contacts within drug user circles, but lack educational credentials, writing skills, and/or training in qualitative research methods. Social workers with several years of experience often have both the credentials and the street savvy to become good ethnographers. Over several years and projects, our experience has been that the "street savvy" person often makes the better field worker: such work enhances their employment record, so they remain loyal staff members for the duration of the project and often across several projects. The graduate student or well-educated person often has considerable difficulty accessing illicit drug users and markets, and often leaves before the project ends (due to graduation, higher-paid jobs, or other reasons). Our best ethnographers have been persons who recovered from heroin/crack abuse as young adults, but who have earned bachelor's and master's degrees and have been trained to routinely write rich field notes and conduct high-quality interviews with extensive probing. 2

Far more important than hiring the "right person" is the investigator's ability to systematically train all staff in the details of the qualitative research process, and to give them time to upgrade their skills until they perform the work well. Especially at locations (such as New Orleans and Houston) remote from the investigator's office (in New York City), having doctoral-level consultants or co-investigators who can provide ongoing staff supervision greatly enhances the project. At project startup, all staff need five days or more of training in various procedures and approaches, involving both formal training and actual practice sessions. Formal training includes:

  • in-depth discussion of the practical application, and reading, of previously published articles about entering the field and conducting research among drug users (Dunlap and Johnson 1998; Dunlap et al 1993);
  • maintaining personal safety for both subjects and field staff in dangerous situations (Williams et al 1992);
  • informed consent procedures approved by the institutional review board;
  • ethical issues that may arise (Dunlap and Johnson 2006);
  • the basics of observations and field note writing;
  • the conduct of personal interviews, and conceptualization of the major issues to be researched;
  • how to account for and record expenses incurred during the study; and
  • plans for regular staff meetings.

The investigator also needs to ensure that all staff members clearly understand the project’s aims and purpose. The investigator must provide considerable clarity about what each staff person is expected to produce in terms of field notes and interviews during a week or month. The initial training usually does not include development of protocols, which comes later. This training also includes instruction in how to follow detailed written procedures for submission of textual data or recordings (described in more detail below).

An important component of training occurs via role-playing, which develops experience in observation, field note writing, interviewing, and the other skills expected of staff. When each staff member can effectively demonstrate these skills via role-playing, they are sent into the field setting to begin work and practice them. Training in observations and writing field notes educates staff about how to enter the field and make the ethnographer's presence known and accepted. It also leads to discussion of the difficulties and inconsistencies that the main work will entail. After initial training, the most important activity is providing regular ongoing supervision; biweekly staff conference calls usually accomplish this. It is easier to monitor the quantity of production than the quality of work this way, because writing field notes and awaiting transcription of interviews takes more time. Ethnographers also need to conduct their research outside of regular business hours, adjusting their work schedules to fit the typical hours of potential subjects (active drug users and sellers, in the Katrina project).

Recording and managing units of work

As staff begin conducting the qualitative research activity, they need to be trained, directed, and supervised to routinely create units of work. A good operational definition in qualitative research: a unit of work is equivalent to a file (usually written in Microsoft Word) that describes staff effort accomplished at a specific date, time, and location, and with a particular focus. This is especially important with regard to field notes. Moreover, each unit of work needs "header information" that is standard across all units of work. The field worker records a file name, the date, the approximate time, his or her name and location, a description of subject(s), and other information as requested. The header used in the Katrina study is provided in figure 2. The controlling feature is the filename, in which are embedded the type of work done (field note or voice-recorded interview), the staff member who collected the data, the date it was collected, and whether it refers to a specific respondent. The additional information in the header must be systematically recorded. Each header item will become a field in the FileMaker Pro database, which organizes all the units of work for analysis at a later time.

Figure 2. Example of Header Information and an Observation-Only Field Note.
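The controlling role of the filename can be sketched in code. The article does not spell out the exact naming pattern, so the convention below (type letter, staff initials, date, optional subject ID) is purely illustrative, as are the example filenames:

```python
import re
from datetime import datetime

# Hypothetical filename convention (the exact pattern is not given in the text):
#   <type><staff initials>-<YYYYMMDD>[-<subject ID>].<ext>
#   where <type> is "F" (field note) or "V" (voice-recorded interview).
FILENAME_RE = re.compile(
    r"^(?P<type>[FV])(?P<staff>[A-Z]{2,3})-(?P<date>\d{8})"
    r"(?:-(?P<subject>S\d{3}))?\.(?P<ext>\w+)$"
)

def parse_unit_of_work(filename):
    """Recover the header fields embedded in a unit-of-work filename."""
    m = FILENAME_RE.match(filename)
    if m is None:
        raise ValueError(f"filename does not follow convention: {filename}")
    return {
        "type": "field note" if m.group("type") == "F" else "voice file",
        "staff": m.group("staff"),
        "date": datetime.strptime(m.group("date"), "%Y%m%d").date(),
        # None for observation-only field notes with no specific respondent
        "subject": m.group("subject"),
    }

info = parse_unit_of_work("FJD-20071115-S023.doc")
```

Embedding these fields in the filename means every file is self-describing even before its header is read into the database.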

When a field note is written, or any other unit of work created, it is critically important that each ethnographer maintain every file they generate on their own computer for subsequent cross-checking. They should also print and keep paper copies of their units of work, for ease of cross-checking specific units of work with the staff maintaining the integrated project database (see below). All field notes and units of work need to be routinely submitted or uploaded electronically to the central data repository of the project (as described below).

Observations and descriptive field notes

One of the most important skills for ethnographers is the ability to enter a field setting, make systematic observations, hold informal conversations with persons present, and screen for "persons of interest"--in our case, those involved as consumers or sellers of illegal drugs. Sometimes their observations may result in new contacts, or in no contacts, and yet provide interesting information that needs to be recorded (e.g., a community demonstration against destruction of public housing in New Orleans). At other times staff will conduct informal conversations as they go about their work. A major difficulty can arise here. Staff need considerable pressure and supervision to routinely generate field notes about each unit of work. In a large-scale qualitative research project, each field note is the rough equivalent of a short interview in a quantitative research project. The ethnographer needs considerable self-discipline to make 1-3 hours of direct observations in the field, and then spend several hours writing field notes based upon those observations. Indeed, writing the field notes may take more time than making the observations. The field notes need to be very descriptive of what was observed, without being judgmental or disrespectful of the behaviors or lifestyles of the people observed. Many staff may lack skills in writing clear sentences, reproducing interactions or informal conversations with potential subjects, and developing extensive, thickly descriptive field notes. It often takes much time and intensive supervision to get staff trained and systematically supervised to write excellent field notes. That ethnographers write descriptive field notes matters much more than their grammar and sentence structure; the important issue is whether the field note conveys clearly what happened and was observed.
Further, since observations are focused upon persons engaged in illegal behaviors (drug sales), observations will need to include interpretations of the meanings implied by the words respondents use, as well as what might be learned by returning for additional observations later. Our experience has been that each unit of work must have its own field note. Thus, if a field worker makes observations at three different locations, each location needs to result in a separate field note, because each is a unique setting and a different unit of work. Due to the need to maintain the confidentiality of respondents (who have given informed consent) and of potential respondents or persons present in field settings (who do not know of the ethnographer's role), no cameras of any kind are employed.

Likewise, if the interviewer conducts a personal interview with a subject that covers only the first 25 questions in the protocol, they need to write a field note about that unit of work and clearly indicate that only these 25 questions were covered. Conducting another interview covering the remaining questions constitutes another unit of work. Two types of field notes are generated. "Linked field notes" describe contacts or interviews with a specific person chosen as a research subject. If that subject is contacted or observed in the field on other dates, each contact should have its own field note with a different file name, but that person's ID number is recorded in the file name and the header.

“Observation only field notes” result when the ethnographer goes into the field and only observes the general scene and maybe has informal conversations with unknown persons (not research subjects); such a field note should record what was observed (without using actual names). In the Katrina project, staff have found it important to also document that “nothing is happening” or that no one was observed at specific locations where previously active drug distribution was observed.

Across several weeks and months of data collection, numerous field notes get written, and expenses associated with conducting field research and payments for interviews begin to pile up. Keeping track of all these units of work becomes extremely cumbersome and time-consuming when using only paper files. Managing paper files was especially problematic in New Orleans, as mail service (both pickup and delivery) was severely disrupted by the flooding of most local post offices. Electricity and especially internet service were far more reliable than conventional mail six months after the hurricane.

Managing expenses and units of work

During the first nine months of the Katrina project, staff recorded their interviews and field notes and submitted them on diskettes and/or in paper files to the central office where the data were stored. This rapidly bogged down all project work, because ethnographers spent too much time keeping track of paperwork, and several paper copies were lost in the mail. Some other mechanism was needed. Fortunately, the organization's [National Development and Research Institutes, Inc. (NDRI)] network and web-based software has developed to the point where data can be recorded electronically and uploaded successfully across vast distances, with confidentiality ensured by double password protection. As a result, FileMaker Pro was able to integrate two major data management functions that had been very time-consuming and error-prone.

Field expenses

While conducting ethnographic research, staff in New Orleans and Houston drive vast distances to meet with subjects or to make observations, thus incurring mileage expenses. In addition, they purchase sodas or food for potential subjects while making observations, and they provide incentives for completing the qualitative protocols. Previously, each of these expenses needed to be recorded on paper receipts and submitted on a timely basis to ensure reimbursement. 3 Instead, we now use programmer-developed scripts in FileMaker Pro that computerize the recording of field expenses, with each field note linked to its associated expenses. Ethnographers in Houston and New Orleans go to a specified website and enter expense information directly into the project database in New York. Staff members enter each type of expense associated with each unit of work (a field note or personal interview), effectively billing the project for those units of work. For auditing purposes, a reconciliation of expenses is generated and submitted to the fiscal department. The project advances money via funds transfer directly into the ethnographers' personal bank accounts. This procedure substantially reduces the time devoted to accounting for field expenses and provides nearly instantaneous recording of expenses and advances to the ethnographer. Figure 3 displays a screen shot of the expenses associated with a field note and personal interviews.

Figure 3. Screen shot from FileMaker Pro of expenses associated with one field note and two personal interviews.
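The core of the expense linkage described above is simple: each expense row carries the filename of the unit of work it belongs to, and reconciliation totals them per unit. A minimal sketch, with hypothetical filenames, categories, and amounts:

```python
from collections import defaultdict

# Illustrative expense rows, each linked to a unit of work by its filename;
# the filenames, categories, and amounts here are invented for the example.
expenses = [
    # (unit_of_work_filename, category, amount_usd)
    ("FJD-20071115-S023.doc", "mileage", 18.50),
    ("FJD-20071115-S023.doc", "refreshments", 6.25),
    ("VJD-20071116-S023.wav", "interview incentive", 30.00),
]

def reconcile(expense_rows):
    """Total expenses per unit of work, as in the fiscal reconciliation report."""
    totals = defaultdict(float)
    for unit, _category, amount in expense_rows:
        totals[unit] += amount
    return dict(totals)

report = reconcile(expenses)
```

Because every expense is keyed to a unit of work, the same records serve both auditing and the per-ethnographer advance calculations.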

Uploading field notes and files

The web-based interface now enables ethnographers in remote sites to directly submit their work to the central repository on the NDRI network. On a routine basis, ethnographers now upload each and every field note via the internet. In addition, digital voice files recorded during personal interviews (see below), often 5-20 megabytes in size, are required to be uploaded to the project repository. This means that staff no longer need to mail diskettes or CDs containing their work to New York. Virtually all raw data files submitted by staff members are now stored electronically in the project repository where one administrator is responsible for keeping the data flow organized. Staff conference calls scheduled every two weeks enable the investigators to provide feedback about the quality of work by ethnographic staff, and to discuss emergent findings.

Protocols for eliciting rich qualitative data

After several weeks of ethnographic observations and systematic review of field notes, and drawing upon the collective experience of the research team, staff members collaborate to develop a protocol that is specifically designed to elicit rich qualitative data. This involves several meetings with all staff present, usually several drafts of each question, reorganization of question order, and clarification about the major focus of the project. Two general classes of questions are developed. “Concrete questions” are designed to measure relatively common roles or phenomena, such as demographics (gender, ethnicity, age) and education, residence and residential locations, etc.; these are fairly easy to develop but often elicit answers that may not be straightforward to code. By contrast, “thematic questions” are designed to elicit extensive reports and stories about the focal topics of interest (e.g. drug use patterns, drug sales activity, perceptions of illegal drug markets). Five thematic questions from the 15-page qualitative protocol for the Katrina project are provided in figure 4 .

Figure 4. Questions as written in the structured qualitative protocol (five of 101 questions).

Now let’s talk about during the disaster and before your evacuation from New Orleans. We refer below to DURING KATRINA DISASTER: this began on Aug 29 (Monday) and lasted thru September 6 (Tues) when virtually everyone was evacuated from New Orleans.

In the process of developing structured qualitative protocols, staff develop a lead question about a specific topic, along with several probes to be asked of persons who respond positively. But since all possible probes cannot be included in a protocol, ethnographers also need to be trained to listen carefully and then ask appropriate probes. This open-ended questioning approach also seeks the meaning of specific argot terms (e.g., "trees" for marijuana; Johnson et al 2006). A serendipitous observation by the subject may provide an insight about tactics to conceal distribution (e.g., a car wash providing cover for selling).

A key strategy in the development of probes is to ask respondents to "tell us a story" about a topic, with the follow-up question, "Is there anything else?" This strategy lets people talk as much as they want about the topic and provide further interesting details. In their "story" lies the rich data that qualitative methodology is so excellent at obtaining. But a major shortcoming of qualitative methodology is that so many follow-up questions can be generated that the interview becomes very long. Another limitation is that respondents often talk about issues that are "off the topic" of the question; they need to be reminded to address the topic.

The Principal Investigator is responsible for developing and finalizing the written interview protocol for data collection. Project staff also need careful instruction and systematic training to read a question as written and, if necessary, repeat it. Likewise, each staff member must be carefully trained to understand each question in the same way. Every question is discussed in detail, with everyone stating their understanding of its purpose and the domain to be tapped. Staff also need to follow the order of questions in the protocol. They need to clearly state the question number being asked, so that the transcriber types the question number and the exact words spoken by the interviewer. [The administrator enters data from the transcript into the correct fields in the database; see below.] After the protocol is developed and the staff trained to conduct it, each ethnographer begins to recruit subjects and conduct personal interviews. While completing the informed consent process, each subject is asked to give a preferred code name, which is the only identifying information recorded during interviews. Administration of this protocol serves three important functions: 1) the respondent has time to feel comfortable with the ethnographer, 2) the ethnographer develops rapport with the subject and can elicit more honest answers or disclosures, and 3) the respondent provides rich data--their stories--and answers the questions posed by the ethnographer. With a lengthy qualitative protocol such as that used in the Katrina project (100 main questions, each with many probes), a given subject may need two to four interview sessions to complete it.

Recording interviews with digital voice recorders

During the 1990s and early 2000s, our qualitative research team used tape recorders with cassettes on which interviews were recorded. These cassettes would then be submitted to the central repository and subsequently transcribed. But cassettes could be lost, sometimes had poor recording quality, and suffered other limitations. Another major technological advance, the digital voice recorder (DVR), now provides much superior sound quality and recording accuracy for about the same price as a cassette recorder. DVRs are more compact and easier to use (with training), and they eliminate the multiple steps involved in handling recorded cassettes. They are especially valuable in field settings where background noise may interfere with recording. The quality of a DVR recording can be improved by directing a remote microphone at the respondent and by having the interviewer sit near the recording device when asking questions. One drawback of the digital voice recorder is that it is sometimes easy to unintentionally delete a previously recorded file. A second limitation is that digital voice files grow very large and become more difficult to handle. Staff members report that it is best to record for a half to three-quarters of an hour on one digital voice file, take a break, and resume the interview with a different digital voice file. Thus, one interview often has two related digital voice files. All files are electronic digital files that can be easily stored and transmitted using computer technology.

After the interview is complete, the interviewer connects the digital voice recorder to their computer and copies that voice file onto it; the file is given a filename parallel to the field note convention, but beginning with V (voice) rather than F (field note). After being stored on the ethnographer's hard drive, each digital voice file can be easily uploaded via the Internet to the central data repository for the project. Only the field worker has rights (password protected) to upload files into their location in the repository. The confidentiality and anonymity of both the digital voice file and the associated field notes are maintained, and both are recorded in the central repository for subsequent handling.

Managing the central repository

The central repository is the location where all of the original raw data files submitted by the ethnographic staff are stored on a network hard drive (and backed up to CDs). Each staff member has their own unique location for storing their field notes, digital voice files, and transcripts. One administrative staff person is responsible for managing all of the files.

Every field note and digital voice file uploaded by field staff is entered into a spreadsheet that tracks the progress of that unit of work. This spreadsheet is routinely provided to each field worker so they can cross-check that the central repository contains all the work they have done; if something is missing, the appropriate files are uploaded again.
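The cross-check itself reduces to a set difference between the ethnographer's local files and the repository's tracking spreadsheet. A minimal sketch, with invented filenames:

```python
def missing_from_repository(local_files, repository_files):
    """Return units of work the ethnographer holds locally but the central
    repository does not; these are the files to be uploaded again."""
    return sorted(set(local_files) - set(repository_files))

# Hypothetical example: one field note never made it to the repository.
local = {"FJD-20071115-S023.doc", "FJD-20071118.doc", "VJD-20071116-S023.wav"}
repo = {"FJD-20071115-S023.doc", "VJD-20071116-S023.wav"}
to_upload = missing_from_repository(local, repo)
```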

Each digital voice file is uploaded via the internet to a transcriber who has been specially trained to maintain the confidentiality of these data and to accurately reproduce the exact words spoken by the interviewer and each research subject. After transcription, a Word file containing the transcript is forwarded to the administrator along with the invoice for that transcript; the administrator then pays for and stores the transcript in the central repository for review by the ethnographer who completed that interview. Each digital voice file and its associated transcripts are also tracked for the timeliness, quality, and accuracy of transcribing. The administrator can also correct misspellings, and/or listen to the audio files to resolve “inaudibles.”

Building an integrated qualitative data base

Careful programming of FileMaker Pro provides a highly structured environment in which the administrator effectively copies text from the transcripts and pastes the contents into a field in the relational database. This relational database systematically links each field note and each transcript with the subject ID number and with the name of the ethnographer who generated it, as well as the expenses associated with that unit of work.
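Because the transcriber types the question number before each exchange, the transcript can be split mechanically into per-question blocks, mirroring the administrator's copy-and-paste into database fields. This sketch assumes a "Q<number>:" convention in the transcript, which the article implies but does not specify exactly; the sample text is invented:

```python
import re

def split_transcript_by_question(transcript):
    """Map each question number to the text (question plus answer) that
    follows it, as the administrator would paste into database fields."""
    parts = re.split(r"(?m)^Q(\d+):", transcript)
    # re.split with a capturing group yields [preamble, num, text, num, text, ...]
    fields = {}
    for num, text in zip(parts[1::2], parts[2::2]):
        fields[int(num)] = text.strip()
    return fields

sample = """Q1: How old are you?
A: I was born in March of 1970, down in the Ninth Ward.
Q2: Where were you living the month before the storm?
A: Uptown, with my cousin.
"""
fields = split_transcript_by_question(sample)
```

Keeping the interviewer's question together with the answer in each field preserves the conversational context the analyst later needs.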

The project database is maintained on a network drive that allows several people to work with the database at the same time. This FileMaker Pro database has been programmed so that each question in the structured interview protocol has a location (or field) in the database where a subject’s answer to that question is to be stored. The administrator reviews the Word transcript, then copies and pastes the answers to each question in the appropriate field in the database. The end result is a database which displays the code names and numbers of subjects interviewed in a row at the top of the screen and the answers to each question (with the questions) in fields below. A screenshot of a portion of one subject’s answers from the database is displayed in figure 5 . This also displays the semi-quantitative coding system described below.

Figure 5. Screen shot from the Katrina Project Database (program: FileMaker Pro). These windows contain:

  • The question as written in the structured qualitative protocol (two examples)
  • Window containing the text addressing that question, both the interviewer’s question and respondent’s answer.
  • Semi-quantitative code categories developed for that question.
  • Numeric code(s) entered by coder based upon the answer (2) and code categories (3).
  • “Info” button provides demographic information about the subject, interviewer, and date.

The administrator also systematically reviews the database for missing information and blank cells. Data may be missing for a given question because the interviewer has not yet completed the interview with that subject, or the question may have been appropriately skipped following skip instructions in the interview protocol. It is also possible that the interviewer failed to ask the question or get a response to a particular item. Follow-up protocols and interviews with the same subjects can be organized in the same manner as the baseline protocol, but with the follow-up questions added to the list of variables, and recorded as separate interviews. Focus group protocols are also entered into the database, but often these subjects were not part of the main qualitative study and hence have no personal interview. Focus group subjects are handled differently, organized according to the questions asked.
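The review for missing information can be done programmatically: scan each subject's record for questions with no recorded answer, then follow up on why each gap exists. A minimal sketch over an invented in-memory form of the database:

```python
def find_missing(database, question_numbers):
    """List (subject, question) pairs with no answer recorded, so the
    administrator can check whether the item was legitimately skipped,
    not yet asked, or overlooked by the interviewer."""
    gaps = []
    for subject, answers in database.items():
        for q in question_numbers:
            if not answers.get(q, "").strip():
                gaps.append((subject, q))
    return gaps

# Hypothetical subject records keyed by code number.
db = {
    "S023": {1: "34 years old", 2: ""},
    "S041": {1: "born in 1970"},
}
gaps = find_missing(db, [1, 2])
```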

The end result is a large and extensive database containing mountains of words. But the structured organization of the data permits the staff members to efficiently conduct analyses with the database, as described next.

Retrieval of relevant responses for a topic

By the end of 2007, approximately 106 Katrina subjects had been recruited and had completed at least one interview; the majority have completed the full baseline protocol, and their responses are entered into the database. FileMaker Pro offers several ways of accessing these data and generating results specific to a given topic. A straightforward approach is to select 2-4 questions, and then read every respondent's answers to those questions on the screen. The analyst can select sections of text (quotes) and paste them into a working file for later use. The "Info" button provides the analyst with a summary of key information about each respondent (code name, gender, ethnicity, age, date of interview, interviewer).

An important function is the query, a procedure for locating responses to specific questions associated with the topic the analyst wishes to address. The analyst specifies the questions in the interview protocol whose answers they wish to obtain. The query rapidly returns all relevant fields for all subjects for those questions. The output from a query can be generated as a Word document, an Excel file, or other formats the analyst might desire. If desired, only specific subsets of respondents can be queried; for instance, a particular query can be limited to those subjects interviewed in New Orleans, or to those engaged in heroin use. Depending on the topic chosen, additional questions in the database may be retrieved. For example, the qualitative protocol (and database) contains four questions about violence: before Katrina, during Katrina, shortly after Katrina, and at the current time. But many more questions inquire about participation in drug markets in these time periods, and they can be queried and analyzed for their relationship to violence. Overall, the Katrina database makes the information extremely easy to retrieve and use for analysis. But the way the analysis is approached will determine what is retrieved and how it is analyzed.
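The logic of such a query, restricting to a subset of respondents and returning one question's answers across subjects, can be sketched as follows. The FileMaker Pro internals are not described in the article, so this is a rough in-memory equivalent with invented data:

```python
def query(database, demographics, question, site=None):
    """Return each matching subject's answer to one question; a rough
    equivalent of the FileMaker Pro query described above."""
    results = {}
    for subject, answers in database.items():
        # Optionally restrict to a subset of respondents, e.g. by site.
        if site is not None and demographics[subject]["site"] != site:
            continue
        if question in answers:
            results[subject] = answers[question]
    return results

# Hypothetical answers to question 7 and subject demographics.
db = {"S023": {7: "Mostly weed, some powder."},
      "S041": {7: "I stayed away from all that."}}
demo = {"S023": {"site": "New Orleans"}, "S041": {"site": "Houston"}}
nola = query(db, demo, 7, site="New Orleans")
```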

Qualitative analysis with the textual data

While the database will provide mountains of words regarding a particular topic, the hard work just begins. The analyst will need to carefully read through several screens of narrative, or pages of printouts, and figure out how to use these data in a report they may wish to develop. A purely qualitative analysis will identify and examine themes that emerge from a careful reading of the data. Particularly useful quotes may be identified, copied, and used in the written report. This kind of analysis is especially useful in identifying contradictions between respondent reports and what might be obtained in quantitative analysis. For example, one paper from the Transient Domesticity project analyzed qualitative data about beatings in childhood. When asked, "Did you receive beatings while growing up in your household?" more than half of the subjects denied it. However, ethnographic probing and analysis of their stories revealed that a negative answer did not mean the absence of physical assault. Rather, many of the respondents regarded the physical punishments they received as distinct from being "beaten." They provided rich stories about how they had been beaten (from the perspective of the analyst). Some legitimated their punishment as "spankings" or as "deserved." The published report provides quoted material from several subjects whose stories depicted childhood beatings even though they reported that they had not been beaten (Dunlap, Golub, and Johnson 2003b; also see Dunlap et al 2003a and Dunlap et al 2006). This suggests that the scientific (and layperson's) understanding of a phenomenon (beatings) may not mean the same thing to respondents, and that a "yes or no" answer in a quantitative survey may seriously undercount the phenomenon; the better data are elicited by probing the subject.

Semi-quantitative coding

Some questions in the interview protocol, and answers stored in the database, may be especially amenable to transformation into quantitative codes like those used in typical surveys. Generally these involve demographics or other concrete roles. The analyst can code persons according to gender and level of education. Our experience indicates that respondent answers about their ethnicity and age are often difficult to code into standard close-ended categories. For example, when asked to explain their ethnicity, many people provide extensive answers about their ancestors' ethnicity and backgrounds. Likewise, the simple question "How old are you?" often elicits a story about where they were born, along with a variety of evasive answers, so that a specific age is difficult to code. Asking their date of birth more often elicits a standard answer of month, day, and year, from which a specific numeric age can be calculated and assigned. Other relatively easy-to-code variables include the reported ZIP code of residence, the neighborhood where they reside, the occupants of their households, whether they are employed, and descriptions of their job type. Although many words may be available in the qualitative dataset, the answers given by most subjects to questions like these can be classified into categories to which numeric codes can be assigned. These quantitative codes, derived from qualitative data, are most useful in describing the characteristics of the persons sampled and/or providing basic descriptives of their backgrounds relevant to the analyst's written analysis of the topic. Typically such data are presented as percentages in a table accompanying the qualitative analysis of a topic (see Ream et al 2006).
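The date-of-birth approach to coding age can be made explicit. The calculation below simply subtracts years and adjusts for whether the birthday has occurred yet; the example dates are invented:

```python
from datetime import date

def age_from_dob(dob, on_date):
    """Compute a numeric age from a reported date of birth, as described
    above for coding the age item."""
    years = on_date.year - dob.year
    # Subtract one if the birthday has not yet occurred in on_date's year.
    if (on_date.month, on_date.day) < (dob.month, dob.day):
        years -= 1
    return years

age = age_from_dob(date(1970, 3, 14), date(2007, 11, 15))
```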

Most qualitative research, however, is designed to illuminate phenomena that are not so easily transformed into numeric codes. On any given topic, and in answers to interviewers' questions, respondent stories may be highly differentiated and varied in content. The analyst must read through much textual data, trying to locate and understand the different themes or uniformities that emerge from respondent stories. This includes identifying textual material (appropriate quotes) that exemplifies the themes (or categories) relevant for the written report. The analyst needs to define categories for the different themes and may assign a numeric value to each category. We refer to this process as semi-quantitative coding because the actual referent, and the phenomena being classified, are diffuse and not widely understood or agreed upon in American culture. The assignment of such (numeric) semi-quantitative codes also grossly simplifies, and possibly reifies, the answers given by subjects. But once the coding scheme has been generated, the analytic staff can encode each respondent's answer according to those themes and classify the responses into numbers. Note that many subject files may be missing data or contain textual information that cannot be coded into any of the classifications. Whether and how to use these semi-quantitative codes remains the task of the analyst developing a paper.
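The bookkeeping behind such a scheme can be as simple as a mapping from theme labels to numbers, with a reserved code for missing or uncodable text. A sketch with invented theme labels (not the projects' actual codebook):

```python
# Hypothetical theme codebook for one open-ended question; the labels are
# invented for illustration. Code 9 is reserved for missing/uncodable answers.
THEME_CODES = {
    "denied any beating": 0,
    "legitimated as spanking": 1,
    "described as deserved": 2,
    "acknowledged beating": 3,
}
MISSING = 9

def code_answer(theme_label):
    """Map an analyst-assigned theme label to its numeric code (9 if uncodable)."""
    return THEME_CODES.get(theme_label, MISSING)

print(code_answer("legitimated as spanking"))  # -> 1
print(code_answer("no codable text"))          # -> 9
```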

In the Katrina project, the investigators developed one or two semi-quantitative codes (or variables) for each question in the qualitative protocol. In Figure 5, the right side contains the semi-quantitative codes (a list of drugs) developed for this question (with codes 0-19). Well-trained coders read the text on the left and decide which codes on the right are mentioned in the text. They then enter the code number(s) in the box in the middle. If multiple codes are appropriate, the coder enters them separated by a space (in this example, the codes for cocaine (2) and heroin (4) are entered). While the coding process is relatively straightforward, it is very tedious and time consuming: reading and systematically coding the Mountain of Words recorded in the Katrina database for all 106 subjects took several coders approximately half a year. When the coding process was completed, FileMaker Pro enabled export of these codes into an Excel file, which was then converted for use in a quantitative program such as SPSS or SAS. This means that the semi-quantitative codes can now be analyzed quantitatively in conjunction with the extensive qualitative data. Future analyses and publications can provide both qualitative and quantitative analyses of these rich data.
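The space-separated code entries described above must eventually be split apart and flattened for a statistics package. The projects did this through FileMaker Pro's own Excel export; the sketch below shows the same transformation in plain Python, with hypothetical subject IDs and code strings:

```python
import csv
import io

# Hypothetical coded records: subject ID plus the space-separated drug codes
# a coder entered for one question (here 2 = cocaine, 4 = heroin, 0 = none).
records = [("K001", "2 4"), ("K002", "0"), ("K003", "2")]

def explode_codes(records):
    """Split each space-separated code string into a list of ints per subject."""
    return {sid: [int(c) for c in codes.split()] for sid, codes in records}

def to_csv(coded):
    """Write one row per subject with 0/1 indicator columns for each code seen."""
    all_codes = sorted({c for codes in coded.values() for c in codes})
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(["subject"] + [f"code_{c}" for c in all_codes])
    for sid, codes in coded.items():
        w.writerow([sid] + [int(c in codes) for c in all_codes])
    return buf.getvalue()

print(to_csv(explode_codes(records)))
```

The resulting flat file of indicator columns is the form SPSS or SAS expects for cross-tabulation and regression.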

One additional software package, GoToMyPC (gotomypc.com), now permits all project staff (with appropriate password protection) to access, read, and copy data from the FileMaker Pro database while working from remote sites (e.g., at home, while traveling, or at offices in New Orleans or Houston) without needing to install FileMaker Pro on their remote computers. Thus, all project staff can work on/with the same database, often at the same time.

Creating quantitative protocols from ethnographic insights

One important outcome of qualitative research is to inform the development of quantitative protocols that capture better information about phenomena of interest. That is, during the collection and analysis of qualitative data, important insights about social processes are uncovered. These insights can be used to develop detailed close-ended codes for inclusion in a quantitative protocol that can then be administered to many additional respondents. While any quantitative protocol will lack the rich detail obtained in qualitative data, quantitative data and results will provide better information about the numbers and proportions of subjects who were actively involved in that topic. With both quantitative and qualitative data, analysts can write reports that are richer because they contain both reliable numeric information and a detailed understanding of the phenomena.

As an example, one publication (Sifaneck et al. 2007) from the Marijuana/Blunts project highlights how the following hypothesis was tested: advantaged persons pay much more than the less advantaged for their marijuana in NYC. During the first year of research, ethnographers observed that marijuana retail sales units varied from $5 to $50 and more; both marijuana sellers and consumers reported substantial differences in the quality and type of marijuana. No one knew the actual weights and price per gram of retail marijuana purchases; all lacked scientific precision. Ethnographic observations among marijuana smokers recruited from a variety of SES groups in NYC also suggested that white middle-class consumers usually purchased "cubes" of high-quality marijuana for $50 from concealed delivery services, while smokers in poor communities usually purchased "bags" of lower-quality marijuana for $10-$20. A special subproject was designed to collect both qualitative and quantitative data in a systematic fashion. Among their wide contacts, ethnographers were able to recruit marijuana buyers who allowed 99 purchases to be weighed. Each subject collaborated with the ethnographer to weigh a recent marijuana purchase on a digital scale (accurate to 1/100th of a gram) and then answered several questions about themselves and the product (gender, ethnicity, SES, dealer type, quality of marijuana, price paid). Independent of the subject's quality rating, our experienced ethnographers observed and "graded" the quality of the marijuana purchase. Since the subject always retained possession of their marijuana, no legal issues arose. In the analysis of these data, Sifaneck et al. (2007) systematically described the differences (from qualitative data) between "designer" marijuana (usually grown hydroponically, with flowering buds preserved by plastic cubes) and "commercial" marijuana (usually grown outdoors, compressed into bricks for transport, and sold in ziplock baggies).
Furthermore, designer marijuana was almost exclusively purchased from private delivery services by middle- and upper-income persons, usually whites with good legal incomes working in lower Manhattan. Commercial marijuana was sold by a range of distributors including street sellers, storefronts, and some private residences, but rarely by a delivery service. The quantitative analysis of the weight and price data indicated clear differences in price per gram between purchases of commercial (average $8.20/g) and designer (average $18.02/g) marijuana. Designer purchases were often sold under brand names describing actual strains like Sour Diesel and White Widow; these were sold only in downtown markets to persons who paid $50 (or more) for 2.5 g in a cube. Commercial marijuana purchases were more likely to be made by blacks, uptown (Harlem), via street dealers, and in units of $5, $10, and $20 bags. Imported commercial types Arizona and Chocolate were found only uptown. Logistic regression indicated that the distinction between designer and commercial purchases was the most important factor in price paid per gram, more important than (though highly correlated with) gender, ethnicity, dealer type, or location in New York City.
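The central quantitative step, average price per gram by market segment, is simple arithmetic over the weighed purchases. A sketch with invented purchase records (not the study's data, which showed averages of $8.20/g commercial and $18.02/g designer):

```python
from statistics import mean

# Hypothetical purchase records: (market type, price paid in $, weight in grams)
purchases = [
    ("designer", 50.0, 2.5), ("designer", 50.0, 2.8),
    ("commercial", 10.0, 1.2), ("commercial", 20.0, 2.5),
]

def mean_price_per_gram(purchases, market):
    """Average $/g across all purchases of the given market type."""
    return mean(price / grams for m, price, grams in purchases if m == market)

print(round(mean_price_per_gram(purchases, "designer"), 2))
print(round(mean_price_per_gram(purchases, "commercial"), 2))
```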

Conclusions

This article provides an overview of the authors' efforts and experience in efficiently managing the Mountains of Words collected during large qualitative research projects like the three described above. The planning, organizing, collecting, transcribing, storing, retrieving, coding, and analytic approaches described herein are necessary to facilitate the hard work of data analysis and report writing. While the procedures developed may be reasonably efficient and effective in locating and retrieving appropriate and highly relevant qualitative textual segments, the analyst retains responsibility for all aspects of preparing an article for journal publication. After running a query that retrieves the questions and answers most relevant to a given topic, the analyst and research team will often have to review pages of quoted material, searching for respondent statements that clearly indicate something about a given theme. Even after identifying such quoted material and arranging it according to thematic content, many other issues arise. The analyst must then review the relevant scientific literature to frame the key themes or ideas emerging from the ethnographic data, place the research methods and findings within the context of this literature, and write a coherent narrative incorporating these qualitative data into an article that makes an important scientific contribution to the published literature. Such information may also provide important guidance for intervention agents and agencies. This article cannot provide more guidance about how to complete such a report.

Researchers planning ethnographic projects in the future will need to be more efficient in the use of limited financial resources. The technological advances for organizing, storing, and retrieving data described above can help in this regard. Although the structured qualitative research approach has been useful in managing the mountains of words generated by our projects, we recognize that many other researchers have also successfully accomplished the major functions of planning, collecting, storing, and organizing qualitative analysis. Many have used other ethnographic programs, including The Ethnograph, ATLAS.ti, NUD*IST, and others. Our experience has been that these programs have severe limitations for retrieving appropriate materials from a large quantity of words collected from respondents, although they may be very appropriate for the specific types of analysis that ethnographers conduct. Indeed, a variety of other approaches for integrating qualitative and quantitative data analysis are available (MacMillan and Koenig 2004; Mangabeira, Lee, and Fielding 2004; Miles and Huberman 1994).

A true relational database such as FileMaker Pro, which we prefer, has a few drawbacks. The average social scientist will need to master it or to have the support of an experienced database programmer who can create the appropriate FileMaker Pro database structure and generate special reports as needed. Its capabilities are more successfully exploited in a well-networked environment (such as a university or major research institution) where well-trained database staff can assist. Nevertheless, investigators and analysts need to invest substantial time learning to use this program efficiently and effectively for both the qualitative and semi-quantitative data analysis described above. Investigators outside the United States may also find the procedures outlined above useful in planning ethnographic research. The authors would be pleased to provide further elaboration on the issues addressed above and can be contacted at the e-mail address above.

Acknowledgments

Preparation of this paper was supported by grants from the National Institute on Drug Abuse (R01 DA021783-03, 1R01 DA13690-05, R01 DA009056-12, 5T32 DA07233-24), and by National Development and Research Institutes. Points of view, opinions, and conclusions in this paper do not necessarily represent the official position of the U.S. Government or National Development and Research Institutes. The authors acknowledge with appreciation the many contributions to this research by Lawrence Duncan, Stanley Hoogerwerf, Joseph Kotarba, Edward Morse, Gwangi Richardson-Alston, Claudia Jenkins, and Vicki Zaleski.

1 The specific projects reported here, and all prior research, have been carefully reviewed by the institutional review board; further, all persons participating as research subjects have given their informed consent prior to interview and are compensated for their information and time.

2 Active drug users and/or persons in recovery often lack many essential skills, have difficulty mastering the skill of writing rich field notes, and often leave before the project ends.

3 This paper-receipt system worked in all prior projects conducted in NYC, but paper receipts for field expenses are now giving way to similar electronic filing and documentation.

  • Barry Christine A. Choosing qualitative data analysis software: Atlas/ti and Nudist compared. Sociological Research Online. 1998;3:1–16.
  • Bazeley P. The evolution of a project involving an integrated analysis of structured qualitative and quantitative data: From N3 to NVivo. International Journal of Social Research Methodology. 2002;5(3):229–243.
  • Benoit Ellen, Randolph Doris, Dunlap Eloise, Johnson Bruce D. Code switching and inverse imitation among marijuana-smoking crack sellers. British Journal of Criminology. 2003;43(3):506–525.
  • Brent Edward, Slusarz Pawel. "Feeling the beat": Intelligent coding advice from metaknowledge in qualitative research. Social Science Computer Review. 2003;21(3):281–303.
  • Crowley C, Harré R, Tagg C. Qualitative research and computing: Methodological issues and practices in using QSR NVivo and NUD*IST. International Journal of Social Research Methodology. 2002;5:193–199.
  • Davis W Rees, Johnson Bruce D, Liberty Hilary, Randolph Doris. Street drugs: Obtaining reliable self and surrogate reports about the use and sale of crack, powder cocaine, and heroin. In: Cole Spencer, editor. Street Drugs: New Research. Hauppauge, NY: Nova Science Publishers; 2006. pp. 55–79.
  • Dunlap Eloise, Johnson Bruce D. Ethical and legal dilemmas in ethnographic field research: Three case studies. In: Buchanan David, editor. Ethical and Legal Issues in Research with High-Risk Populations: Addressing Threats of Suicide, Child Abuse, and Violence. Washington, DC: American Psychological Association; 2006.
  • Dunlap Eloise, Benoit Ellen, Sifaneck Stephen J, Johnson Bruce D. Social constructions of dependency by blunts smokers: Qualitative reports. International Journal of Drug Policy. 2006;17:171–182.
  • Dunlap Eloise, Golub Andrew, Johnson Bruce D. The lived experience of welfare reform in drug-using welfare-needy households in inner-city New York. Journal of Sociology and Social Welfare. 2003a;30(3):39–58.
  • Dunlap Eloise, Golub Andrew, Johnson Bruce D. Girls' sexual development in the inner city: From compelled childhood sexual contact to sex-for-things exchanges. Journal of Child Sexual Abuse. 2003b;12(2):73–96.
  • Dunlap Eloise, Johnson Bruce D, Morse Edward. Illicit drug markets among New Orleans evacuees before and soon after Hurricane Katrina. Journal of Drug Issues. 2007;37(4):981–1006.
  • Dunlap Eloise, Johnson Bruce D, Sanabria Harry, Holliday Elbert, Lipsey Vickie, Barnett Maurice, Hopkins William, Sobel Ira, Randolph Doris, Chin Ko-lin. Studying crack users and their criminal careers. In: Newman William M, Boudreau Frances A, editors. Understanding Social Life: A Reader in Sociology. Minneapolis: West Publishing; 1993. pp. 43–53.
  • Dunlap Eloise, Johnson Bruce D. Gaining access to hidden populations: Strategies for gaining cooperation of sellers/dealers in ethnographic research. In: De La Rosa Mario, Segal Bernard, Lopez Richard, editors. Conducting Drug Abuse Research with Minority Populations: Advances and Issues. Wilmington, PA: Hayworth Press; 1998.
  • Fielding Nigel G, Lee Raymond M. Using Computers in Qualitative Research. Newbury Park, CA: Sage; 1991.
  • FileMaker Pro. 2008. http://www.filemaker.com/
  • Gerbert B, Caspers N, Moe J, Clanon K, Abercrombie P, Herzig K. The mysteries and demands of HIV care: Qualitative analyses of HIV specialists' views on their expertise. AIDS Care. 2004;16:363–376.
  • Johnson Bruce D, Bardhi Flutura, Sifaneck Stephen J, Dunlap Eloise. Marijuana argot as subculture threads: Social constructions by users in New York City. British Journal of Criminology. 2006;46(1):46–77.
  • Lewis Carla, Johnson Bruce D, Golub Andrew L, Dunlap Eloise. Studying crack abusers: Strategies for recruiting the right tail of an ill-defined population. Journal of Psychoactive Drugs. 1992;24(3):323–336.
  • MacMillan Katie, Koenig Thomas. The wow factor: Preconceptions and expectations for data analysis software in qualitative research. Social Science Computer Review. 2004;22(2):179–186.
  • Mangabeira Wilma C, Lee Raymond M, Fielding Nigel G. Computers and qualitative research: Adoption, use, and representation. Social Science Computer Review. 2004;22:167.
  • Mann Chris, Stewart Fiona. Internet Communication and Qualitative Research: A Handbook for Researching Online. Sage Publications; 2000.
  • Manwar Ali, Dunlap Eloise, Johnson Bruce. Qualitative data analysis with HyperText: A case study of New York City crack dealers. Qualitative Sociology. 1994;17(3):283–292.
  • Miles Matthew B, Huberman Michael A. Qualitative Data Analysis: An Expanded Sourcebook. Thousand Oaks: Sage Publications; 1994.
  • Muhr Thomas. ATLAS.ti Qualitative Software Package. Berlin: Scientific Software; 2005.
  • Ream Geoffrey, Johnson Bruce D, Sifaneck Stephen, Dunlap Eloise. Distinguishing blunts users from joints users: A comparison of marijuana use subcultures. In: Cole Spencer, editor. Street Drugs: New Research. Hauppauge, NY: Nova Science Publishers; 2006. pp. 245–273.
  • Richards Tom. An intellectual history of NUD*IST and NVivo. International Journal of Social Research Methodology. 2002;5:199–214.
  • Seidel John, Friese S. The Ethnograph v5.0: A Program for the Analysis of Text Based Data. Colorado Springs: Qualis Research Associates; 1998.
  • Sifaneck Stephen J, Ream Geoffrey, Johnson Bruce D, Dunlap Eloise. Retail marijuana purchases in designer and commercial markets in New York City: Sales units, weights, and prices per gram. Drug and Alcohol Dependence. 2007;90S:S40–S51.
  • Williams Terry, Dunlap Eloise, Johnson Bruce D, Hamid Ansley. Personal safety in dangerous places. Journal of Contemporary Ethnography. 1992;21(3):343–374.

Observation Method in Psychology: Naturalistic, Participant and Controlled

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


The observation method in psychology involves directly and systematically witnessing and recording measurable behaviors, actions, and responses in natural or contrived settings without attempting to intervene or manipulate what is being observed.

Used to describe phenomena, generate hypotheses, or validate self-reports, psychological observation can be either controlled or naturalistic with varying degrees of structure imposed by the researcher.

There are different types of observational methods, and distinctions need to be made between:

1. Controlled Observations
2. Naturalistic Observations
3. Participant Observations

In addition to the above categories, observations can also be either overt/disclosed (the participants know they are being studied) or covert/undisclosed (the researcher keeps their real identity a secret from the research subjects, acting as a genuine member of the group).

In general, conducting observational research is relatively inexpensive, but it remains highly time-consuming and resource-intensive in data processing and analysis.

The considerable investments needed in terms of coder time commitments for training, maintaining reliability, preventing drift, and coding complex dynamic interactions place practical barriers on observers with limited resources.

Controlled Observation

Controlled observation is a research method for studying behavior in a carefully controlled and structured environment.

The researcher sets specific conditions, variables, and procedures to systematically observe and measure behavior, allowing for greater control and comparison of different conditions or groups.

The researcher decides where the observation will occur, at what time, with which participants, and in what circumstances, and uses a standardized procedure. Participants are randomly allocated to each independent variable group.

Rather than writing a detailed description of all behavior observed, it is often easier to code behavior according to a previously agreed scale using a behavior schedule (i.e., conducting a structured observation).

The researcher systematically classifies the behavior they observe into distinct categories. Coding might involve numbers or letters to describe a characteristic or the use of a scale to measure behavior intensity.

The categories on the schedule are coded so that the data collected can be easily counted and turned into statistics.

For example, Mary Ainsworth used a behavior schedule to study how infants responded to brief periods of separation from their mothers. During the Strange Situation procedure, the infant’s interaction behaviors directed toward the mother were measured, e.g.,

  • Proximity and contact-seeking
  • Contact maintaining
  • Avoidance of proximity and contact
  • Resistance to contact and comforting

The observer noted down the behavior displayed during 15-second intervals and scored the behavior for intensity on a scale of 1 to 7.
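Turning such interval records into countable statistics is straightforward tallying, as in this sketch with invented interval data (Ainsworth's actual scoring procedure is more elaborate):

```python
from collections import Counter

# Hypothetical 15-second interval records: (behavior category, intensity 1-7)
intervals = [
    ("proximity-seeking", 5), ("contact-maintaining", 6),
    ("proximity-seeking", 4), ("avoidance", 2),
]

# Frequency of each behavior category across intervals
counts = Counter(category for category, _ in intervals)

# Highest intensity observed per category
peak = {cat: max(i for c, i in intervals if c == cat) for cat in counts}

print(counts["proximity-seeking"])  # number of intervals with this behavior
print(peak["proximity-seeking"])    # highest intensity score recorded
```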

[Figure: Strange Situation scoring scale]

Sometimes participants’ behavior is observed through a two-way mirror, or they are secretly filmed. Albert Bandura used this method to study aggression in children (the Bobo doll studies).

A lot of research has also been carried out in sleep laboratories. Here, electrodes are attached to the scalps of participants, and what is observed are the changes in the electrical activity of the brain during sleep, recorded by an EEG machine.

Controlled observations are usually overt as the researcher explains the research aim to the group so the participants know they are being observed.

Controlled observations are also usually non-participant as the researcher avoids direct contact with the group and keeps a distance (e.g., observing behind a two-way mirror).

Strengths

  • Controlled observations can be easily replicated by other researchers using the same observation schedule, making it easy to test for reliability.
  • The data obtained from structured observations are easier and quicker to analyze as they are quantitative (i.e., numerical), making this a less time-consuming method compared to naturalistic observations.
  • Controlled observations are fairly quick to conduct, which means many observations can take place within a short amount of time. A large sample can therefore be obtained, making the findings representative and generalizable to a larger population.

Limitations

  • Controlled observations can lack validity due to the Hawthorne effect /demand characteristics. When participants know they are being watched, they may act differently.

Naturalistic Observation

Naturalistic observation is a research method in which the researcher studies behavior in its natural setting without intervention or manipulation.

It involves observing and recording behavior as it naturally occurs, providing insights into real-life behaviors and interactions in their natural context.

Naturalistic observation is a research method commonly used by psychologists and other social scientists.

This technique involves observing and studying the spontaneous behavior of participants in natural surroundings. The researcher simply records what they see in whatever way they can.

In unstructured observations, the researcher records all relevant behavior without a predetermined coding system. There may be too much to record, and the behaviors recorded may not necessarily be the most important, so the approach is usually used as a pilot study to see what types of behavior would be recorded.

Compared with controlled observations, it is like the difference between studying wild animals in a zoo and studying them in their natural habitat.

With regard to human subjects, Margaret Mead used this method to research the way of life of different tribes living on islands in the South Pacific. Kathy Sylva used it to study children at play by observing their behavior in a playgroup in Oxfordshire.

Collecting Naturalistic Behavioral Data

Technological advances are enabling new, unobtrusive ways of collecting naturalistic behavioral data.

The Electronically Activated Recorder (EAR) is a digital recording device participants can wear to periodically sample ambient sounds, allowing representative sampling of daily experiences (Mehl et al., 2012).

Studies program EARs to record 30-50 second sound snippets multiple times per hour. Although coding the recordings requires extensive resources, EARs can capture spontaneous behaviors like arguments or laughter.

EARs minimize participant reactivity since sampling occurs outside of awareness. This reduces the Hawthorne effect, where people change behavior when observed.

The SenseCam is another wearable device that passively captures images documenting daily activities. Though primarily used in memory research currently (Smith et al., 2014), systematic sampling of environments and behaviors via the SenseCam could enable innovative psychological studies in the future.

Strengths

  • By being able to observe the flow of behavior in its own setting, studies have greater ecological validity.
  • Like case studies, naturalistic observation is often used to generate new ideas. Because it gives the researcher the opportunity to study the total situation, it often suggests avenues of inquiry not thought of before.
  • It allows researchers to capture actual behaviors as they unfold in real time, analyze sequential patterns of interaction, measure base rates of behaviors, and examine socially undesirable or complex behaviors that people may not self-report accurately.
Limitations

  • These observations are often conducted on a micro (small) scale and may lack a representative sample (biased in relation to age, gender, social class, or ethnicity). This may result in findings that cannot be generalized to wider society.
  • Naturalistic observations are less reliable as other variables cannot be controlled. This makes it difficult for another researcher to repeat the study in exactly the same way.
  • Highly time-consuming and resource-intensive during the data coding phase (e.g., training coders, maintaining inter-rater reliability, preventing judgment drift).
  • With observations, we do not have manipulations of variables (or control over extraneous variables), meaning cause-and-effect relationships cannot be established.

Participant Observation

Participant observation is a variant of naturalistic observation, but here the researcher joins in and becomes part of the group they are studying to gain a deeper insight into their lives.

If it were research on animals , we would now not only be studying them in their natural habitat but be living alongside them as well!

Leon Festinger used this approach in a famous study into a religious cult that believed that the end of the world was about to occur. He joined the cult and studied how they reacted when the prophecy did not come true.

Participant observations can be either covert or overt. Covert is where the study is carried out “undercover.” The researcher’s real identity and purpose are kept concealed from the group being studied.

The researcher takes a false identity and role, usually posing as a genuine member of the group.

On the other hand, overt is where the researcher reveals his or her true identity and purpose to the group and asks permission to observe.

  • It can be difficult to get time/privacy for recording. For example, researchers can’t take notes openly with covert observations as this would blow their cover. This means they must wait until they are alone and rely on their memory. This is a problem as they may forget details and are unlikely to remember direct quotations.
  • If the researcher becomes too involved, they may lose objectivity and become biased. There is always the danger that we will “see” what we expect (or want) to see. This is a problem because the researcher may selectively record information instead of noting everything they observe, reducing the validity of the data.

Recording of Data

With controlled/structured observation studies, an important decision the researcher has to make is how to classify and record the data. Usually, this will involve a method of sampling.

In most coding systems, codes or ratings are made either per behavioral event or per specified time interval (Bakeman & Quera, 2011).

The three main sampling methods are:

Event-based coding involves identifying and segmenting interactions into meaningful events rather than timed units.

For example, parent-child interactions may be segmented into control or teaching events to code.

Interval recording involves dividing interactions into fixed time intervals (e.g., 6-15 seconds) and coding behaviors within each interval (Bakeman & Quera, 2011).

Event recording allows counting event frequency and sequencing while also potentially capturing event duration through timed-event recording. This provides information on time spent on behaviors.
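The contrast between the two schemes can be sketched in a few lines of code, using a hypothetical event stream with onset times in seconds:

```python
from collections import Counter

# Hypothetical observation stream: (onset in seconds, behavior code)
events = [(2, "eye-contact"), (7, "nod"), (14, "eye-contact"), (21, "nod")]

# Event recording: simply count occurrences of each behavior.
event_counts = Counter(code for _, code in events)

def interval_record(events, interval=10, total=30):
    """Partial-interval recording: mark which behaviors occurred in each
    fixed-length interval, regardless of how often or how long."""
    bins = {start: set() for start in range(0, total, interval)}
    for onset, code in events:
        bins[(onset // interval) * interval].add(code)
    return bins

print(event_counts["eye-contact"])  # total number of eye-contact events
print(interval_record(events))      # which behaviors occurred in which interval
```

Event recording preserves exact frequencies and ordering; interval recording trades that precision for simpler, faster coding.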

Coding Systems

The coding system should focus on behaviors, patterns, individual characteristics, or relationship qualities that are relevant to the theory guiding the study (Wampler & Harper, 2014).

Codes vary in how much inference is required, from concrete observable behaviors like frequency of eye contact to more abstract concepts like degree of rapport between a therapist and client (Hill & Lambert, 2004). More inference may reduce reliability.

Macroanalytic coding systems

Macroanalytic coding systems involve rating or summarizing behaviors using larger coding units and broader categories that reflect patterns across longer periods of interaction rather than coding small or discrete behavioral acts. 

For example, a macroanalytic coding system may rate the overall degree of therapist warmth or level of client engagement globally for an entire therapy session, requiring the coders to summarize and infer these constructs across the interaction rather than coding smaller behavioral units.

These systems require observers to make more inferences (more time-consuming) but can better capture contextual factors, stability over time, and the interdependent nature of behaviors (Carlson & Grotevant, 1987).

Microanalytic coding systems

Microanalytic coding systems involve rating behaviors using smaller, more discrete coding units and categories.

For example, a microanalytic system may code each instance of eye contact or head nodding during a therapy session. These systems code specific, molecular behaviors as they occur moment-to-moment rather than summarizing actions over longer periods.

Microanalytic systems require less inference from coders and allow for analysis of behavioral contingencies and sequential interactions between therapist and client. However, they are more time-consuming and expensive to implement than macroanalytic approaches.

Mesoanalytic coding systems

Mesoanalytic coding systems attempt to balance macro- and micro-analytic approaches.

In contrast to macroanalytic systems that summarize behaviors in larger chunks, mesoanalytic systems use medium-sized coding units that target more specific behaviors or interaction sequences (Bakeman & Quera, 2017).

For example, a mesoanalytic system may code each instance of a particular type of therapist statement or client emotional expression. However, mesoanalytic systems still use larger units than microanalytic approaches that code every speech onset and offset.

The goal of balancing specificity and feasibility makes mesoanalytic systems well-suited for many research questions (Morris et al., 2014). Mesoanalytic codes can preserve some sequential information while remaining efficient enough for studies with adequate but limited resources.

For instance, a mesoanalytic couple interaction coding system could target key behavior patterns like validation sequences without coding turn-by-turn speech.

In this way, mesoanalytic coding allows reasonable reliability and specificity without requiring extensive training or observation. The mid-level focus offers a pragmatic compromise between depth and breadth in analyzing interactions.
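One way to see the trade-off between the three levels is in the shape of the data each produces. The sketch below is purely illustrative; the field names, codes, and rating scale are assumptions, not a published coding scheme.

```python
# Illustrative data shapes for the three coding granularities. A micro
# code is a timed behavioral act, a meso code summarizes a medium-sized
# interaction sequence, and a macro code is a global session rating.
from dataclasses import dataclass

@dataclass
class MicroCode:
    onset_s: float   # moment-to-moment act, e.g. one instance of eye contact
    offset_s: float
    code: str

@dataclass
class MesoCode:
    sequence: str    # e.g. a validation sequence in a couple interaction
    count: int       # how many times the sequence occurred in the session

@dataclass
class MacroCode:
    construct: str   # e.g. "therapist warmth", inferred across the session
    rating: int      # a single summary judgement, say on a 1-7 scale

session = {
    "micro": [MicroCode(12.0, 13.5, "eye_contact"),
              MicroCode(47.2, 47.9, "head_nod")],
    "meso":  [MesoCode("validation_sequence", 3)],
    "macro": [MacroCode("therapist_warmth", 6)],
}
print(len(session["micro"]), "micro codes;",
      session["macro"][0].construct, "rated", session["macro"][0].rating)
```

The micro list grows with every act and preserves sequence; the macro list stays one entry per construct per session, which is why macro coding demands more coder inference but far less recording.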

Preventing Coder Drift

Coder drift is a source of measurement error caused by gradual shifts in how coders apply operational definitions when rating observations, especially when behavioral codes are not clearly specified.

This error creeps in when coders stop regularly reviewing precisely which observations do, and do not, count as instances of the behaviors being measured.

Preventing drift refers to taking active steps to maintain consistency and minimize changes or deviations in how coders rate or evaluate behaviors over time. Specifically, some key ways to prevent coder drift include:
  • Operationalize codes : It is essential that code definitions unambiguously distinguish what interactions represent instances of each coded behavior. 
  • Ongoing training : Returning to those operational definitions through ongoing training serves to recalibrate coder interpretations and reinforce accurate recognition. Having regular “check-in” sessions where coders practice coding the same interactions allows monitoring that they continue applying codes reliably without gradual shifts in interpretation.
  • Using reference videos : Having coders periodically code the same “gold standard” reference videos anchors their judgments and keeps them calibrated against the original training. Without periodic anchoring to the original specifications, coder decisions tend to drift away from their initial reliability.
  • Assessing inter-rater reliability : Statistically tracking whether coders maintain high levels of agreement over the course of a study, not just at the start, flags any decline that indicates drift. Sustaining inter-rater agreement requires mitigating this common tendency for observer judgments to change during intensive, long-term coding tasks.
  • Recalibrating through discussion : Regular meetings where coders openly discuss disagreements help uncover why judgments may be shifting over time and restore consensus on how the codes are applied.
  • Adjusting unclear codes : If reliability issues persist, revisiting and refining ambiguous code definitions or anchors can eliminate inconsistencies arising from coder confusion.

Essentially, the goal of preventing coder drift is maintaining standardization and minimizing unintentional biases that may slowly alter how observational data gets rated over periods of extensive coding.

Through the upkeep of skills, continuing calibration to benchmarks, and monitoring consistency, researchers can notice and correct for any creeping changes in coder decision-making over time.
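The inter-rater reliability tracking described above can be made concrete with Cohen's kappa, a standard chance-corrected agreement statistic for two coders. The sketch below is minimal and illustrative; the codes and data are assumptions, and the note about a 0.80 benchmark reflects a common convention rather than a universal rule.

```python
# Minimal sketch: Cohen's kappa, chance-corrected agreement between two
# coders rating the same observations. Tracking kappa throughout a study,
# not just after training, helps flag coder drift before it accumulates.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Both arguments are equal-length lists of categorical codes."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of exact agreements.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two coders rating the same ten intervals ("C" = control, "T" = teaching):
a = ["C", "C", "T", "C", "T", "T", "C", "C", "T", "C"]
b = ["C", "C", "T", "T", "T", "T", "C", "C", "T", "C"]
print(round(cohens_kappa(a, b), 2))  # → 0.8
```

A sustained drop in kappa below a pre-registered benchmark (often around 0.80) during the coding period would trigger the recalibration steps listed above.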

Reducing Observer Bias

Observational research is prone to observer biases resulting from coders’ subjective perspectives shaping the interpretation of complex interactions (Burghardt et al., 2012). When coding, personal expectations may unconsciously influence judgments. However, rigorous methods exist to reduce such bias.

Coding Manual

A detailed coding manual minimizes subjectivity by clearly defining what behaviors and interaction dynamics observers should code (Bakeman & Quera, 2011).

High-quality manuals have strong theoretical and empirical grounding, laying out explicit coding procedures and providing rich behavioral examples to anchor code definitions (Lindahl, 2001).

Clear delineation of the frequency, intensity, duration, and type of behaviors constituting each code facilitates reliable judgments and reduces ambiguity for coders. Without such clarity about how codes translate to observable interaction, application risks inconsistency across raters.

Coder Training

Competent coders require both interpersonal perceptiveness and scientific rigor (Wampler & Harper, 2014). Training thoroughly reviews the theoretical basis for coded constructs and teaches the coding system itself.

Multiple “gold standard” criterion videos demonstrate code ranges that trainees independently apply. Coders then meet weekly to establish reliability of 80% or higher agreement both among themselves and with master criterion coding (Hill & Lambert, 2004).

Ongoing training manages coder drift over time. Revisions to unclear codes may also improve reliability. Both careful selection and investment in rigorous training increase quality control.

Blind Methods

To prevent bias, coders should remain unaware of specific study predictions or participant details (Burghardt et al., 2012). Separate data gathering versus coding teams helps maintain blinding.

In addition, scheduling procedures can prevent coders from rating data collected directly from participants with whom they have had personal contact. Maintaining coder independence and blinding enhances objectivity.

References

Bakeman, R., & Quera, V. (2017). Sequential analysis and observational methods for the behavioral sciences. Cambridge University Press.

Burghardt, G. M., Bartmess-LeVasseur, J. N., Browning, S. A., Morrison, K. E., Stec, C. L., Zachau, C. E., & Freeberg, T. M. (2012). Minimizing observer bias in behavioral studies: A review and recommendations. Ethology, 118 (6), 511–517.

Hill, C. E., & Lambert, M. J. (2004). Methodological issues in studying psychotherapy processes and outcomes. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change (5th ed., pp. 84–135). Wiley.

Lindahl, K. M. (2001). Methodological issues in family observational research. In P. K. Kerig & K. M. Lindahl (Eds.), Family observational coding systems: Resources for systemic research (pp. 23–32). Lawrence Erlbaum Associates.

Mehl, M. R., Robbins, M. L., & Deters, F. G. (2012). Naturalistic observation of health-relevant social processes: The electronically activated recorder methodology in psychosomatics. Psychosomatic Medicine, 74 (4), 410–417.

Morris, A. S., Robinson, L. R., & Eisenberg, N. (2014). Applying a multimethod perspective to the study of developmental psychology. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (2nd ed., pp. 103–123). Cambridge University Press.

Smith, J. A., Maxwell, S. D., & Johnson, G. (2014). The microstructure of everyday life: Analyzing the complex choreography of daily routines through the automatic capture and processing of wearable sensor data. In B. K. Wiederhold & G. Riva (Eds.), Annual Review of Cybertherapy and Telemedicine 2014: Positive Change with Technology (Vol. 199, pp. 62–64). IOS Press.

Traniello, J. F., & Bakker, T. C. (2015). The integrative study of behavioral interactions across the sciences. In T. K. Shackelford & R. D. Hansen (Eds.), The evolution of sexuality (pp. 119–147). Springer.

Wampler, K. S., & Harper, A. (2014). Observational methods in couple and family assessment. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (2nd ed., pp. 490–502). Cambridge University Press.

Qualitative study design: Observation

Observation

A way to gather data by watching people, events, or noting physical characteristics in their natural setting. Observation seeks to answer the question: “What is going on here?” While rooted in ethnographic research, it can be applied to other methodologies. Observations are often supplemented with interviews.

There are three main categories:

Participant observation

  • The researcher becomes a participant in the culture or context being observed.
  • Requires the researcher to be accepted as part of the culture being observed in order to succeed.

Direct observation

  • The researcher strives to be as unobtrusive as possible so as not to bias the observations, and must remain detached.
  • Technology can be useful (e.g. video or audio recording).

Indirect observation

  • The results of an interaction, process or behaviour are observed (for example, measuring the amount of plate waste left by students in a school cafeteria to determine whether a new food is acceptable to them).

Observations may be unstructured, semi-structured or structured. The latter two involve the use of an observation template that includes prompting questions such as: “What are people doing?”; “What are they trying to accomplish?”; “How are they doing this?”
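A semi-structured observation template of the kind just described can be as simple as a fixed set of prompts with space for notes against each. The sketch below is hypothetical; the fields, setting, and example note are illustrative assumptions, not a published instrument.

```python
# A hypothetical semi-structured observation template: fixed prompts
# guide the observer, and free-text field notes are recorded per prompt.
template = {
    "setting": "hospital ward, morning medication round",  # illustrative
    "prompts": [
        "What are people doing?",
        "What are they trying to accomplish?",
        "How are they doing this?",
    ],
    "notes": {},  # prompt -> list of time-stamped field notes
}

def record_note(template, prompt, note):
    """Append a field note under the prompt it answers."""
    template["notes"].setdefault(prompt, []).append(note)

record_note(template, "What are people doing?",
            "09:12 nurse checks chart, confirms patient identity aloud")
print(len(template["notes"]["What are people doing?"]))  # → 1
```

An unstructured observation, by contrast, would drop the fixed prompts and keep only free-running field notes.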

What form does observation take?

    Field notes; audio and video recordings.

Benefits

  • Allows for insight into contexts, relationships, and behaviours.
  • Can provide information previously unknown to researchers that is crucial for project design, data collection, and interpretation of other data.

Limitations

  • Not suited to all research inquiries since not all phenomena can be observed.
  • Time-consuming.
  • Documentation relies on memory, personal discipline, and diligence of researcher.
  • Requires conscious effort at objectivity because method is inherently subjective.
  • Critics maintain that different observers will make different observations of the same phenomena so that no single account can be held up as the source of truth. 

Example questions

  • How do members of operating theatres communicate with each other?
  • How do nurses interact with their patients when administering medication?
  • How do parents deal with their adolescent children who suffer chronic pain?

Example studies

  • Bolster, D., & Manias, E. (2010). Person-centred interactions between nurses and patients during medication activities in an acute hospital setting: Qualitative observation and interview study. International Journal of Nursing Studies , 47(2), 154-165. doi: 10.1016/j.ijnurstu.2009.05.021
  • Bombeke, K., De Winter, B., Debaene, L., Van Royen, P., Van Roosbroeck, S., Van Hal, G., & Schol, S. (2011). Medical students trained in communication skills show a decline in patient-centred attitudes: An observational study comparing two cohorts during clinical clerkships . Patient Education and Counseling , 84(3), 310-318. doi: 10.1016/j.pec.2011.03.007
  • Given, L. M. (2008). The SAGE encyclopedia of qualitative research methods (Vols 1-0). Thousand Oaks, CA: SAGE Publications, Inc. doi: 10.4135/9781412963909
  • Holloway, I. & Galvin, K. (2017). Qualitative research in nursing and healthcare (Fourth ed.) John Wiley & Sons Inc.
  • Last Updated: Apr 8, 2024 11:12 AM
  • URL: https://deakin.libguides.com/qualitative-study-designs

Duke University Libraries

Qualitative Research: Observation

Participant Observation

Field Guide

  • Participant Observation Field Guide

What is an observation?

A way to gather data by watching people, events, or noting physical characteristics in their natural setting. Observations can be overt (subjects know they are being observed) or covert (do not know they are being watched).

Suggested Readings and Film

  • Born into Brothels (2004). An Oscar-winning documentary and an example of participant observation, it portrays the life of children born to prostitutes in Calcutta. New York-based photographer Zana Briski gave cameras to the children and taught them photography.
  • Davies, J. P., & Spencer, D. (2010).  Emotions in the field: The psychology and anthropology of fieldwork experience . Stanford, CA: Stanford University Press.
  • DeWalt, K. M., & DeWalt, B. R. (2011).  Participant observation : A guide for fieldworkers .   Lanham, Md: Rowman & Littlefield.
  • Reinharz, S. (2011).  Observing the observer: Understanding our selves in field research . NY: Oxford University Press.
  • Schensul, J. J., & LeCompte, M. D. (2013).  Essential ethnographic methods: A mixed methods approach . Lanham, MD: AltaMira Press.
  • Skinner, J. (2012).  The interview: An ethnographic approach . NY: Berg.
  • Last Updated: Mar 1, 2024 10:13 AM
  • URL: https://guides.library.duke.edu/qualitative-research



  • Published: 05 October 2018

Interviews and focus groups in qualitative research: an update for the digital age

P. Gill & J. Baillie

British Dental Journal, volume 225, pages 668–672 (2018)


Highlights that qualitative research is used increasingly in dentistry. Interviews and focus groups remain the most common qualitative methods of data collection.

Suggests the advent of digital technologies has transformed how qualitative research can now be undertaken.

Suggests interviews and focus groups can offer significant, meaningful insight into participants' experiences, beliefs and perspectives, which can help to inform developments in dental practice.

Qualitative research is used increasingly in dentistry, due to its potential to provide meaningful, in-depth insights into participants' experiences, perspectives, beliefs and behaviours. These insights can subsequently help to inform developments in dental practice and further related research. The most common methods of data collection used in qualitative research are interviews and focus groups. While these are primarily conducted face-to-face, the ongoing evolution of digital technologies, such as video chat and online forums, has further transformed these methods of data collection. This paper therefore discusses interviews and focus groups in detail, outlines how they can be used in practice, how digital technologies can further inform the data collection process, and what these methods can offer dentistry.

Introduction

Traditionally, research in dentistry has primarily been quantitative in nature. 1 However, in recent years, there has been a growing interest in qualitative research within the profession, due to its potential to further inform developments in practice, policy, education and training. Consequently, in 2008, the British Dental Journal (BDJ) published a four paper qualitative research series, 2 , 3 , 4 , 5 to help increase awareness and understanding of this particular methodological approach.

Since the papers were originally published, two scoping reviews have demonstrated the ongoing proliferation in the use of qualitative research within the field of oral healthcare. 1 , 6 To date, the original four paper series continue to be well cited and two of the main papers remain widely accessed among the BDJ readership. 2 , 3 The potential value of well-conducted qualitative research to evidence-based practice is now also widely recognised by service providers, policy makers, funding bodies and those who commission, support and use healthcare research.

Besides increasing standalone use, qualitative methods are now also routinely incorporated into larger mixed method study designs, such as clinical trials, as they can offer additional, meaningful insights into complex problems that simply could not be provided by quantitative methods alone. Qualitative methods can also be used to further facilitate in-depth understanding of important aspects of clinical trial processes, such as recruitment. For example, Ellis et al . investigated why edentulous older patients, dissatisfied with conventional dentures, decline implant treatment, despite its established efficacy, and frequently refuse to participate in related randomised clinical trials, even when financial constraints are removed. 7 Through the use of focus groups in Canada and the UK, the authors found that fears of pain and potential complications, along with perceived embarrassment, exacerbated by age, are common reasons why older patients typically refuse dental implants. 7

The last decade has also seen further developments in qualitative research, due to the ongoing evolution of digital technologies. These developments have transformed how researchers can access and share information, communicate and collaborate, recruit and engage participants, collect and analyse data and disseminate and translate research findings. 8 Where appropriate, such technologies are therefore capable of extending and enhancing how qualitative research is undertaken. 9 For example, it is now possible to collect qualitative data via instant messaging, email or online/video chat, using appropriate online platforms.

These innovative approaches to research are therefore cost-effective, convenient, reduce geographical constraints and are often useful for accessing 'hard to reach' participants (for example, those who are immobile or socially isolated). 8 , 9 However, digital technologies are still relatively new and constantly evolving and therefore present a variety of pragmatic and methodological challenges. Furthermore, given their very nature, their use in many qualitative studies and/or with certain participant groups may be inappropriate and should therefore always be carefully considered. While it is beyond the scope of this paper to provide a detailed explication regarding the use of digital technologies in qualitative research, insight is provided into how such technologies can be used to facilitate the data collection process in interviews and focus groups.

In light of such developments, it is perhaps therefore timely to update the main paper 3 of the original BDJ series. As with the previous publications, this paper has been purposely written in an accessible style, to enhance readability, particularly for those who are new to qualitative research. While the focus remains on the most common qualitative methods of data collection – interviews and focus groups – appropriate revisions have been made to provide a novel perspective, and should therefore be helpful to those who would like to know more about qualitative research. This paper specifically focuses on undertaking qualitative research with adult participants only.

Overview of qualitative research

Qualitative research is an approach that focuses on people and their experiences, behaviours and opinions. 10 , 11 The qualitative researcher seeks to answer questions of 'how' and 'why', providing detailed insight and understanding, 11 which quantitative methods cannot reach. 12 Within qualitative research, there are distinct methodologies influencing how the researcher approaches the research question, data collection and data analysis. 13 For example, phenomenological studies focus on the lived experience of individuals, explored through their description of the phenomenon. Ethnographic studies explore the culture of a group and typically involve the use of multiple methods to uncover the issues. 14

While methodology is the 'thinking tool', the methods are the 'doing tools'; 13 the ways in which data are collected and analysed. There are multiple qualitative data collection methods, including interviews, focus groups, observations, documentary analysis, participant diaries, photography and videography. Two of the most commonly used qualitative methods are interviews and focus groups, which are explored in this article. The data generated through these methods can be analysed in one of many ways, according to the methodological approach chosen. A common approach is thematic data analysis, involving the identification of themes and subthemes across the data set. Further information on approaches to qualitative data analysis has been discussed elsewhere. 1

Qualitative research is an evolving and adaptable approach, used by different disciplines for different purposes. Traditionally, qualitative data, specifically interviews, focus groups and observations, have been collected face-to-face with participants. In more recent years, digital technologies have contributed to the ongoing evolution of qualitative research. Digital technologies offer researchers different ways of recruiting participants and collecting data, and offer participants opportunities to be involved in research that is not necessarily face-to-face.

Research interviews are a fundamental qualitative research method 15 and are utilised across methodological approaches. Interviews enable the researcher to learn in depth about the perspectives, experiences, beliefs and motivations of the participant. 3 , 16 Examples include exploring patients' perspectives of fear/anxiety triggers in dental treatment, 17 patients' experiences of oral health and diabetes, 18 and dental students' motivations for their choice of career. 19

Interviews may be structured, semi-structured or unstructured, 3 according to the purpose of the study, with less structured interviews facilitating a more in-depth and flexible interviewing approach. 20 Structured interviews are similar to verbal questionnaires and are used if the researcher requires clarification on a topic; however, they produce less in-depth data about a participant's experience. 3 Unstructured interviews may be used when little is known about a topic and involve the researcher asking an opening question; 3 the participant then leads the discussion. 20 Semi-structured interviews are commonly used in healthcare research, enabling the researcher to ask predetermined questions, 20 while ensuring the participant discusses issues they feel are important.

Interviews can be undertaken face-to-face or using digital methods when the researcher and participant are in different locations. Audio-recording the interview, with the consent of the participant, is essential for all interviews regardless of the medium as it enables accurate transcription; the process of turning the audio file into a word-for-word transcript. This transcript is the data, which the researcher then analyses according to the chosen approach.

Types of interview

Qualitative studies often utilise one-to-one, face-to-face interviews with research participants. This involves arranging a mutually convenient time and place to meet the participant, signing a consent form and audio-recording the interview. However, digital technologies have expanded the potential for interviews in research, enabling individuals to participate in qualitative research regardless of location.

Telephone interviews can be a useful alternative to face-to-face interviews and are commonly used in qualitative research. They enable participants from different geographical areas to participate and may be less onerous for participants than meeting a researcher in person. 15 A qualitative study explored patients' perspectives of dental implants and utilised telephone interviews due to the quality of the data that could be yielded. 21 The researcher needs to consider how they will audio record the interview, which can be facilitated by purchasing a recorder that connects directly to the telephone. One potential disadvantage of telephone interviews is that the researcher and participant cannot see each other. This can be resolved by using software for audio and video calls online – such as Skype – to conduct interviews with participants in qualitative studies. Advantages of this approach include being able to see the participant if video calls are used, enabling observation of non-verbal communication, and the software can be free to use. However, participants are required to have a device and internet connection, as well as being computer literate, potentially limiting who can participate in the study. One qualitative study explored the role of dental hygienists in reducing oral health disparities in Canada. 22 The researcher conducted interviews using Skype, which enabled dental hygienists from across Canada to be interviewed within the research budget, accommodating the participants' schedules. 22

A less commonly used approach to qualitative interviews is the use of social virtual worlds. A qualitative study accessed a social virtual world – Second Life – to explore the health literacy skills of individuals who use social virtual worlds to access health information. 23 The researcher created an avatar and interview room, and undertook interviews with participants using voice and text methods. 23 This approach to recruitment and data collection enables individuals from diverse geographical locations to participate, while remaining anonymous if they wish. Furthermore, for interviews conducted using text methods, transcription of the interview is not required as the researcher can save the written conversation with the participant, with the participant's consent. However, the researcher and participant need to be familiar with how the social virtual world works to engage in an interview this way.

Conducting an interview

Ensuring informed consent before any interview is a fundamental aspect of the research process. Participants in research must be afforded autonomy and respect; consent should be informed and voluntary. 24 Individuals should have the opportunity to read an information sheet about the study, ask questions, understand how their data will be stored and used, and know that they are free to withdraw at any point without reprisal. The qualitative researcher should take written consent before undertaking the interview. In a face-to-face interview, this is straightforward: the researcher and participant both sign copies of the consent form, keeping one each. However, this approach is less straightforward when the researcher and participant do not meet in person. A recent protocol paper outlined an approach for taking consent for telephone interviews, which involved: audio recording the participant agreeing to each point on the consent form; the researcher signing the consent form and keeping a copy; and posting a copy to the participant. 25 This process could be replicated in other interview studies using digital methods.

There are advantages and disadvantages of using face-to-face and digital methods for research interviews. Ultimately, for both approaches, the quality of the interview is determined by the researcher. 16 Appropriate training and preparation are thus required. Healthcare professionals can use their interpersonal communication skills when undertaking a research interview, particularly questioning, listening and conversing. 3 However, the purpose of an interview is to gain information about the study topic, 26 rather than offering help and advice. 3 The researcher therefore needs to listen attentively to participants, enabling them to describe their experience without interruption. 3 The use of active listening skills also help to facilitate the interview. 14 Spradley outlined elements and strategies for research interviews, 27 which are a useful guide for qualitative researchers:

Greeting and explaining the project/interview

Asking descriptive (broad), structural (explore response to descriptive) and contrast (difference between) questions

Asymmetry between the researcher and participant talking

Expressing interest and cultural ignorance

Repeating, restating and incorporating the participant's words when asking questions

Creating hypothetical situations

Asking friendly questions

Knowing when to leave.

For semi-structured interviews, a topic guide (also called an interview schedule) is used to guide the content of the interview – an example of a topic guide is outlined in Box 1. The topic guide, usually based on the research questions, existing literature and, for healthcare professionals, their clinical experience, is developed by the research team. The topic guide should include open-ended questions that elicit in-depth information, and offer participants the opportunity to talk about issues important to them. This is vital in qualitative research where the researcher is interested in exploring the experiences and perspectives of participants. It can be useful for qualitative researchers to pilot the topic guide with the first participants, 10 to ensure the questions are relevant and understandable, and to amend the questions if required.

Regardless of the medium of interview, the researcher must consider the setting of the interview. For face-to-face interviews, this could be in the participant's home, in an office or another mutually convenient location. A quiet location is preferable to promote confidentiality, enable the researcher and participant to concentrate on the conversation, and to facilitate accurate audio-recording of the interview. For interviews using digital methods the same principles apply: a quiet, private space where the researcher and participant feel comfortable and confident to participate in an interview.

Box 1: Example of a topic guide

Study focus: Parents' experiences of brushing their child's (aged 0–5) teeth

1. Can you tell me about your experience of cleaning your child's teeth?

How old was your child when you started cleaning their teeth?

Why did you start cleaning their teeth at that point?

How often do you brush their teeth?

What do you use to brush their teeth and why?

2. Could you explain how you find cleaning your child's teeth?

Do you find anything difficult?

What makes cleaning their teeth easier for you?

3. How has your experience of cleaning your child's teeth changed over time?

Has it become easier or harder?

Have you changed how often and how you clean their teeth? If so, why?

4. Could you describe how your child finds having their teeth cleaned?

What do they enjoy about having their teeth cleaned?

Is there anything they find upsetting about having their teeth cleaned?

5. Where do you look for information/advice about cleaning your child's teeth?

What did your health visitor tell you about cleaning your child's teeth? (If anything)

What has the dentist told you about caring for your child's teeth? (If visited)

Have any family members given you advice about how to clean your child's teeth? If so, what did they tell you? Did you follow their advice?

6. Is there anything else you would like to discuss about this?
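For teams that manage topic guides electronically, a guide like the one in Box 1 can be held as structured data, which makes it easy to version, pilot and amend questions between interviews. The sketch below is purely illustrative (the `topic_guide` structure and `render_guide` helper are hypothetical, not part of the original article), using an abbreviated subset of Box 1:

```python
# Illustrative only: a topic guide held as structured data, so main questions
# and their follow-up prompts can be versioned, piloted and amended together.
topic_guide = {
    "study_focus": "Parents' experiences of brushing their child's (aged 0-5) teeth",
    "questions": [
        {"main": "Can you tell me about your experience of cleaning your child's teeth?",
         "prompts": ["How old was your child when you started cleaning their teeth?",
                     "Why did you start cleaning their teeth at that point?",
                     "How often do you brush their teeth?",
                     "What do you use to brush their teeth and why?"]},
        {"main": "Could you explain how you find cleaning your child's teeth?",
         "prompts": ["Do you find anything difficult?",
                     "What makes cleaning their teeth easier for you?"]},
        {"main": "Is there anything else you would like to discuss about this?",
         "prompts": []},
    ],
}

def render_guide(guide):
    """Return the guide as a numbered, printable interview schedule."""
    lines = [f"Study focus: {guide['study_focus']}"]
    for i, question in enumerate(guide["questions"], start=1):
        lines.append(f"{i}. {question['main']}")
        lines.extend(f"   - {prompt}" for prompt in question["prompts"])
    return "\n".join(lines)

print(render_guide(topic_guide))
```

A structure like this also makes piloting amendments auditable: each revision of the guide can be saved and compared, which supports the transparency expected of qualitative research.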

Focus groups

A focus group is a moderated group discussion on a pre-defined topic, for research purposes. 28 , 29 While not aligned to a particular qualitative methodology (for example, grounded theory or phenomenology) as such, focus groups are used increasingly in healthcare research, as they are useful for exploring collective perspectives, attitudes, behaviours and experiences. Consequently, they can yield rich, in-depth data and illuminate agreement and inconsistencies 28 within and, where appropriate, between groups. Examples include public perceptions of dental implants and subsequent impact on help-seeking and decision making, 30 and general dental practitioners' views on patient safety in dentistry. 31

Focus groups can be used alone or in conjunction with other methods, such as interviews or observations, and can therefore help to confirm, extend or enrich understanding and provide alternative insights. 28 The social interaction between participants often results in lively discussion and can therefore facilitate the collection of rich, meaningful data. However, they are complex to organise and manage, due to the number of participants, and may also be inappropriate for exploring particularly sensitive issues that many participants may feel uncomfortable about discussing in a group environment.

Focus groups are primarily undertaken face-to-face but can now also be undertaken online, using appropriate technologies such as email, bulletin boards, online research communities, chat rooms, discussion forums, social media and video conferencing. 32 Using such technologies, data collection can also be synchronous (for example, online discussions in 'real time') or, unlike traditional face-to-face focus groups, asynchronous (for example, online/email discussions in 'non-real time'). While many of the fundamental principles of focus group research are the same regardless of how they are conducted, a number of subtle nuances are associated with the online medium, 32 some of which are discussed further in the following sections.

Focus group considerations

Some key considerations associated with face-to-face focus groups are: how many participants are required; whether participants within each group should know each other; and how many focus groups are needed within a single study. These issues are much debated and there is no definitive answer. However, the number of focus groups required will largely depend on the topic area, the depth and breadth of data needed, the desired level of participation 29 and the necessity (or not) for data saturation.

The optimum group size is around six to eight participants (excluding researchers), but groups can work effectively with between three and 14 participants. 3 If the group is too small, it may limit discussion, but if it is too large, it may become disorganised and difficult to manage. It is, however, prudent to over-recruit for a focus group by approximately two to three participants, to allow for potential non-attenders. For many researchers, particularly novice researchers, group size may also be informed by pragmatic considerations, such as the type of study, resources available and moderator experience. 28 Similar size and mix considerations exist for online focus groups. Typically, synchronous online focus groups will have around three to eight participants but, as the discussion does not happen simultaneously, asynchronous groups may have as many as 10–30 participants. 33
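The recruitment arithmetic described above can be made explicit. The small sketch below is a hypothetical helper (not from the article): it takes a desired group size within the workable three-to-14 range and adds the suggested over-recruitment margin for non-attenders:

```python
# Hedged sketch of focus group recruitment arithmetic: invite more people
# than the desired group size to absorb likely non-attenders.
def recruitment_target(desired_size=8, over_recruit=3, min_size=3, max_size=14):
    """Return how many participants to invite for one face-to-face focus group.

    desired_size: the group size aimed for on the day (optimum ~6-8).
    over_recruit: extra invitations to cover non-attendance (~2-3).
    """
    if not (min_size <= desired_size <= max_size):
        raise ValueError(f"desired size should be between {min_size} and {max_size}")
    return desired_size + over_recruit

# For an eight-person group, invite eleven to allow for two or three no-shows.
print(recruitment_target(desired_size=8, over_recruit=3))  # -> 11
```

In practice the margin would be adjusted to the population being recruited; groups known for high attrition may warrant a larger buffer.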

The topic area and potential group interaction should guide group composition considerations. Pre-existing groups, where participants know each other (for example, work colleagues) may be easier to recruit, have shared experiences and may enjoy a familiarity, which facilitates discussion and/or the ability to challenge each other courteously. 3 However, if there is a potential power imbalance within the group or if existing group norms and hierarchies may adversely affect the ability of participants to speak freely, then 'stranger groups' (that is, where participants do not already know each other) may be more appropriate. 34 , 35

Focus group management

Face-to-face focus groups should normally be conducted by two researchers: a moderator and an observer. 28 The moderator facilitates group discussion, while the observer typically monitors group dynamics, behaviours, non-verbal cues, seating arrangements and speaking order, which is essential for transcription and analysis. The same principles of informed consent, as discussed in the interview section, also apply to focus groups, regardless of medium. However, the consent process for online discussions will probably be managed somewhat differently. For example, while an appropriate participant information leaflet (and consent form) would still be required, the process is likely to be managed electronically (for example, via email) and would need to specifically address issues relating to technology (for example, anonymity and the use, storage of and access to online data). 32

The venue in which a face-to-face focus group is conducted should be of a suitable size, private, quiet, free from distractions and in a collectively convenient location. It should also be conducted at a time appropriate for participants, 28 as this is likely to promote attendance. As with interviews, the same ethical considerations apply (as discussed earlier). However, online focus groups may present additional ethical challenges associated with issues such as informed consent, appropriate access and secure data storage. Further guidance can be found elsewhere. 8 , 32

Before the focus group commences, the researchers should establish rapport with participants, as this will help to put them at ease and result in a more meaningful discussion. Consequently, researchers should introduce themselves, provide further clarity about the study and how the process will work in practice and outline the 'ground rules'. Ground rules are designed to assist, not hinder, group discussion and typically include: 3 , 28 , 29

Discussions within the group are confidential to the group

Only one person can speak at a time

All participants should have sufficient opportunity to contribute

There should be no unnecessary interruptions while someone is speaking

Everyone can expect to be listened to and to have their views respected

Challenging contrary opinions is appropriate, but ridiculing is not.

Moderating a focus group requires considered management and good interpersonal skills to help guide the discussion and, where appropriate, keep it sufficiently focused. The moderator should therefore avoid participating, leading the discussion, expressing personal opinions or correcting participants' knowledge, 3 , 28 as this may bias the process. A relaxed, interested demeanour will also help participants to feel comfortable and promote candid discourse. Moderators should also prevent the discussion being dominated by any one person, ensure differences of opinion are discussed fairly and, if required, encourage reticent participants to contribute. 3 Asking open questions, reflecting on significant issues, inviting further debate, probing responses and seeking further clarification, as and where appropriate, will help to obtain sufficient depth and insight into the topic area.

Moderating online focus groups requires comparable skills, particularly if the discussion is synchronous, as it may be dominated by those who can type proficiently. 36 It is therefore important that sufficient time and respect are accorded to those who may not be able to type as quickly. Asynchronous discussions are usually less problematic in this respect, as interactions are less instant. However, moderating an asynchronous discussion presents additional challenges, particularly if participants are geographically dispersed, as they may be online at different times. Consequently, the moderator will not always be present and the discussion may need to occur over several days, which can be difficult to manage and facilitate and invariably requires considerable flexibility. 32 It is also worth recognising that establishing rapport with participants online is often more challenging than face-to-face, and may therefore require additional time, skill, effort and consideration.

As with research interviews, focus groups should be guided by an appropriate interview schedule, as discussed earlier in the paper. The schedule will usually be informed by the literature review and study aims, and provides a topic guide to help inform subsequent discussions. To provide a verbatim account of the discussion, focus groups must be recorded using an audio-recorder with a good-quality multi-directional microphone. While videotaping is possible, some participants may find it obtrusive, 3 which may adversely affect group dynamics. The use (or not) of a video recorder should therefore be carefully considered.

At the end of the focus group, a few minutes should be spent rounding up and reflecting on the discussion. 28 Depending on the topic area, it is possible that some participants may have revealed deeply personal issues and may therefore require further help and support, such as a constructive debrief or possibly even referral on to a relevant third party. It is also possible that some participants may feel that the discussion did not adequately reflect their views and, consequently, may no longer wish to be associated with the study. 28 Such occurrences are likely to be uncommon but, should they arise, it is important to discuss any concerns further and, if appropriate, offer participants the opportunity to withdraw from the study (including any data relating to them). Immediately after the discussion, researchers should compile notes regarding thoughts and ideas about the focus group, which can assist with data analysis and, if appropriate, any further data collection.

Qualitative research is increasingly being used within dental research to explore the experiences, perspectives, motivations and beliefs of participants. Its contributions to evidence-based practice are increasingly being recognised, both as standalone research and as part of larger mixed-method studies, including clinical trials. Interviews and focus groups remain commonly used data collection methods in qualitative research and, with the advent of digital technologies, their use continues to evolve. Digital methods of qualitative data collection present additional methodological, ethical and practical considerations, but also offer considerable potential flexibility to participants and researchers. Consequently, regardless of format, qualitative methods have significant potential to inform important areas of dental practice, policy and further related research.

Gussy M, Dickson-Swift V, Adams J . A scoping review of qualitative research in peer-reviewed dental publications. Int J Dent Hygiene 2013; 11 : 174–179.

Burnard P, Gill P, Stewart K, Treasure E, Chadwick B . Analysing and presenting qualitative data. Br Dent J 2008; 204 : 429–432.

Gill P, Stewart K, Treasure E, Chadwick B . Methods of data collection in qualitative research: interviews and focus groups. Br Dent J 2008; 204 : 291–295.

Gill P, Stewart K, Treasure E, Chadwick B . Conducting qualitative interviews with school children in dental research. Br Dent J 2008; 204 : 371–374.

Stewart K, Gill P, Chadwick B, Treasure E . Qualitative research in dentistry. Br Dent J 2008; 204 : 235–239.

Masood M, Thaliath E, Bower E, Newton J . An appraisal of the quality of published qualitative dental research. Community Dent Oral Epidemiol 2011; 39 : 193–203.

Ellis J, Levine A, Bedos C et al. Refusal of implant supported mandibular overdentures by elderly patients. Gerodontology 2011; 28 : 62–68.

Macfarlane S, Bucknall T . Digital Technologies in Research. In Gerrish K, Lathlean J (editors) The Research Process in Nursing . 7th edition. pp. 71–86. Oxford: Wiley Blackwell; 2015.

Lee R, Fielding N, Blank G . Online Research Methods in the Social Sciences: An Editorial Introduction. In Fielding N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods . pp. 3–16. London: Sage Publications; 2016.

Creswell J . Qualitative inquiry and research design: Choosing among five designs . Thousand Oaks, CA: Sage, 1998.

Guest G, Namey E, Mitchell M . Qualitative research: Defining and designing In Guest G, Namey E, Mitchell M (editors) Collecting Qualitative Data: A Field Manual For Applied Research . pp. 1–40. London: Sage Publications, 2013.

Pope C, Mays N . Qualitative research: Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995; 311 : 42–45.

Giddings L, Grant B . A Trojan Horse for positivism? A critique of mixed methods research. Adv Nurs Sci 2007; 30 : 52–60.

Hammersley M, Atkinson P . Ethnography: Principles in Practice . London: Routledge, 1995.

Oltmann S . Qualitative interviews: A methodological discussion of the interviewer and respondent contexts. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research 2016; 17 : Art. 15.

Patton M . Qualitative Research and Evaluation Methods . Thousand Oaks, CA: Sage, 2002.

Wang M, Vinall-Collier K, Csikar J, Douglas G . A qualitative study of patients' views of techniques to reduce dental anxiety. J Dent 2017; 66 : 45–51.

Lindenmeyer A, Bowyer V, Roscoe J, Dale J, Sutcliffe P . Oral health awareness and care preferences in patients with diabetes: a qualitative study. Fam Pract 2013; 30 : 113–118.

Gallagher J, Clarke W, Wilson N . Understanding the motivation: a qualitative study of dental students' choice of professional career. Eur J Dent Educ 2008; 12 : 89–98.

Tod A . Interviewing. In Gerrish K, Lacey A (editors) The Research Process in Nursing . Oxford: Blackwell Publishing, 2006.

Grey E, Harcourt D, O'Sullivan D, Buchanan H, Kipatrick N . A qualitative study of patients' motivations and expectations for dental implants. Br Dent J 2013; 214 : 10.1038/sj.bdj.2012.1178.

Farmer J, Peressini S, Lawrence H . Exploring the role of the dental hygienist in reducing oral health disparities in Canada: A qualitative study. Int J Dent Hygiene 2017; 10.1111/idh.12276.

McElhinney E, Cheater F, Kidd L . Undertaking qualitative health research in social virtual worlds. J Adv Nurs 2013; 70 : 1267–1275.

Health Research Authority. UK Policy Framework for Health and Social Care Research. Available at https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/ (accessed September 2017).

Baillie J, Gill P, Courtenay P . Knowledge, understanding and experiences of peritonitis among patients, and their families, undertaking peritoneal dialysis: A mixed methods study protocol. J Adv Nurs 2017; 10.1111/jan.13400.

Kvale S . Interviews . Thousand Oaks (CA): Sage, 1996.

Spradley J . The Ethnographic Interview . New York: Holt, Rinehart and Winston, 1979.

Goodman C, Evans C . Focus Groups. In Gerrish K, Lathlean J (editors) The Research Process in Nursing . pp. 401–412. Oxford: Wiley Blackwell, 2015.

Shaha M, Wenzell J, Hill E . Planning and conducting focus group research with nurses. Nurse Res 2011; 18 : 77–87.

Wang G, Gao X, Edward C . Public perception of dental implants: a qualitative study. J Dent 2015; 43 : 798–805.

Bailey E . Contemporary views of dental practitioners' on patient safety. Br Dent J 2015; 219 : 535–540.

Abrams K, Gaiser T . Online Focus Groups. In Field N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods . pp. 435–450. London: Sage Publications, 2016.

Poynter R . The Handbook of Online and Social Media Research . West Sussex: John Wiley & Sons, 2010.

Kevern J, Webb C . Focus groups as a tool for critical social research in nurse education. Nurse Educ Today 2001; 21 : 323–333.

Kitzinger J, Barbour R . Introduction: The Challenge and Promise of Focus Groups. In Barbour R S K J (editor) Developing Focus Group Research . pp. 1–20. London: Sage Publications, 1999.

Krueger R, Casey M . Focus Groups: A Practical Guide for Applied Research. 4th ed. Thousand Oaks, California: SAGE; 2009.

Author information

Authors and affiliations

Senior Lecturer (Adult Nursing), School of Healthcare Sciences, Cardiff University

Lecturer (Adult Nursing) and RCBC Wales Postdoctoral Research Fellow, School of Healthcare Sciences, Cardiff University

Corresponding author

Correspondence to P. Gill .

About this article

Cite this article

Gill, P., Baillie, J. Interviews and focus groups in qualitative research: an update for the digital age. Br Dent J 225 , 668–672 (2018). https://doi.org/10.1038/sj.bdj.2018.815

Accepted : 02 July 2018

Published : 05 October 2018

Issue Date : 12 October 2018

DOI : https://doi.org/10.1038/sj.bdj.2018.815

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative


