
The Use of Self-Report Data in Psychology


In psychology, a self-report is any test, measure, or survey that relies on an individual's own report of their symptoms, behaviors, beliefs, or attitudes. Self-report data is typically gathered in paper-and-pencil or electronic format, or sometimes through an interview.

Self-reporting is commonly used in psychological studies because it can yield valuable diagnostic information to a researcher or a clinician.

This article explores examples of how self-report data is used in psychology. It also covers the advantages and disadvantages of this approach.

Examples of Self-Reports

To understand how self-reports are used in psychology, it can be helpful to look at some examples. Many well-known assessments and inventories rely on self-reporting to collect data.

Minnesota Multiphasic Personality Inventory (MMPI)

One of the most commonly used self-report tools is the Minnesota Multiphasic Personality Inventory (MMPI) for personality testing. This inventory includes more than 500 questions focused on different areas, including behaviors, psychological health, interpersonal relationships, and attitudes. It is often used as a mental health assessment, but it is also used in legal cases, custody evaluations, and as a screening instrument for some careers.

The 16 Personality Factor (16PF) Questionnaire

This personality inventory is often used as a diagnostic tool to help therapists plan treatment. It can be used to learn more about various individual characteristics, including empathy, openness, attitudes, attachment quality, and coping style.

Myers-Briggs Type Indicator (MBTI)

The MBTI is a popular personality measure that describes personality types in four categories: introversion or extraversion, sensing or intuiting, thinking or feeling, and judging or perceiving. A letter is taken from each category to describe a person's personality type, such as INTP or ESFJ.
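Because a type code simply takes one letter from each of the four categories, the 16 possible MBTI types can be enumerated mechanically. A minimal Python sketch, for illustration only (the variable names are mine, not part of the MBTI):

```python
from itertools import product

# One letter is taken from each of the four MBTI dichotomies.
DICHOTOMIES = [("I", "E"), ("S", "N"), ("T", "F"), ("J", "P")]

# Combining one choice per dichotomy yields the 16 four-letter types.
all_types = ["".join(choice) for choice in product(*DICHOTOMIES)]

print(len(all_types))   # 16
print(all_types[:4])    # ['ISTJ', 'ISTP', 'ISFJ', 'ISFP']
```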

Personality inventories and psychology assessments often utilize self-reporting for data collection. Examples include the MMPI, the 16PF Questionnaire, and the MBTI.

Advantages of Self-Report Data

One of the primary advantages of self-reporting is that the data is easy to obtain. It is also an important way that clinicians diagnose their patients: by asking questions. Those making the self-report are usually familiar with filling out questionnaires.

For research, it is inexpensive and can reach many more test subjects than could be analyzed by observation or other methods. It can also be performed relatively quickly, so a researcher can obtain results in days or weeks rather than observing a population over a longer period.

Self-reports can be made in private and can be anonymized to protect sensitive information and perhaps promote truthful responses.

Disadvantages of Self-Report Data

Collecting information through self-reporting has limitations. People are often biased when they report on their own experiences. For example, many individuals are consciously or unconsciously influenced by "social desirability": they are more likely to report experiences that are considered socially acceptable or preferred.

Self-reports are subject to these biases and limitations:

  • Honesty: Subjects may give the more socially acceptable answer rather than a truthful one.
  • Introspective ability: Subjects may not be able to assess themselves accurately.
  • Interpretation of questions: The wording of the questions may be confusing or mean different things to different subjects.
  • Rating scales: Rating something yes or no can be too restrictive, while numerical scales can be inexact and subject to an individual's inclination to give extreme or middle responses to every question (a simple scoring sketch follows this list).
  • Response bias: Answers can be skewed by earlier responses in the questionnaire, by recent or emotionally significant experiences, and by other contextual factors.
  • Sampling bias: The people who complete the questionnaire are the sort of people who will complete a questionnaire. Are they representative of the population you wish to study?
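To make the rating-scale point concrete, here is a minimal sketch of how a numerical (Likert-style) questionnaire is typically scored; the 5-point range, the item count, and the choice of reverse-keyed items are all hypothetical:

```python
# Scoring a 5-point Likert questionnaire (hypothetical item set).
SCALE_MIN, SCALE_MAX = 1, 5
REVERSE_KEYED = {1, 3}  # indices of negatively worded items

def score_response(ratings):
    """Sum the ratings, flipping reverse-keyed items so that a higher
    total always means more of the measured construct."""
    total = 0
    for i, rating in enumerate(ratings):
        if not SCALE_MIN <= rating <= SCALE_MAX:
            raise ValueError(f"rating {rating} is outside the scale")
        if i in REVERSE_KEYED:
            rating = SCALE_MIN + SCALE_MAX - rating  # 1<->5, 2<->4
        total += rating
    return total

print(score_response([4, 2, 5, 1]))  # 4 + 4 + 5 + 5 = 18
```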

Combining Self-Report Data With Other Information

Most experts in psychological research and diagnosis suggest that self-report data should not be used alone, as it tends to be biased. Research is best done when self-reporting is combined with other information, such as an individual's behavior or physiological data.

This “multi-modal” or “multi-method” assessment provides a more global, and therefore more likely accurate, picture of the subject.
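As a toy illustration of a multi-method composite (not a procedure prescribed by this article), one could standardize each measure against a normative sample and average the resulting z-scores; the numbers, the three sources, and the equal weighting below are all hypothetical:

```python
import statistics

# (normative sample scores, this subject's score) per data source.
measures = {
    "self_report": ([22, 31, 27, 35, 29], 33),
    "behavior":    ([12, 9, 15, 11, 13], 14),
    "physiology":  ([60, 72, 66, 75, 69], 74),
}

def z_score(norm, x):
    """Standardize x against the normative sample's mean and SD."""
    return (x - statistics.mean(norm)) / statistics.stdev(norm)

# Equal-weight average across modalities gives one composite index.
composite = statistics.mean(z_score(norm, x) for norm, x in measures.values())
print(round(composite, 2))
```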

The questionnaires used in research should be checked to see whether they produce consistent results over time. They should also be validated against another data source to demonstrate that responses measure what they claim to measure. A good questionnaire should also discriminate clearly between controls and the test group.

How to Create a Self-Report Study

If you are creating a self-report tool for psychology research, there are a few key steps you should follow. First, decide what type of data you want to collect. This will determine the format of your questions and the type of scale you use.

Next, create a pool of questions that are clear and concise. The goal is to have several items that cover all the topics you wish to address. Finally, pilot your study with a small group to ensure it is valid and reliable.
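When piloting, a common internal-consistency check is Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with made-up pilot data:

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a k-item questionnaire.

    item_scores: k lists, one per item, each holding one score per
    respondent. Uses sample variances throughout."""
    k = len(item_scores)
    sum_item_var = sum(statistics.variance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per respondent
    return (k / (k - 1)) * (1 - sum_item_var / statistics.variance(totals))

# Hypothetical pilot data: 3 items answered by 5 respondents.
items = [
    [3, 4, 2, 5, 4],
    [2, 4, 3, 5, 3],
    [3, 5, 2, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.87; >= 0.70 is a common benchmark
```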

When creating a self-report study, determine what information you need to collect, then test the assessment with a group of individuals to check that the instrument is reliable.

Self-reporting can be a useful tool for collecting data. The benefits of self-report data include lower costs and the ability to collect data from a large number of people. However, self-report data can also be biased and prone to errors.


By Kristalyn Salters-Pedneault, PhD, a clinical psychologist and associate professor of psychology at Eastern Connecticut State University.


Evidence-Based Outcome Research: A practical guide to conducting randomized controlled trials for psychosocial interventions


5. Self-Report Measures

Published: September 2007

Chapter 5 explores self-report (SR) measures in treatment research. It discusses types of SRs, quality of SRs (reliability, validity, sensitivity and specificity in classification, utility), selecting SR measures for outcome research, and response distortions.
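For the classification aspect mentioned above, sensitivity and specificity summarize how a self-report screen's cut-off performs against a diagnostic criterion. A minimal sketch with invented screen/diagnosis pairs:

```python
# Each pair: (screened positive on the self-report, actually has the
# condition per the criterion measure). Data are invented.
results = [(True, True), (True, False), (False, False),
           (True, True), (False, True), (False, False)]

tp = sum(s and d for s, d in results)          # screen +, condition +
tn = sum(not s and not d for s, d in results)  # screen -, condition -
fp = sum(s and not d for s, d in results)      # false alarms
fn = sum(not s and d for s, d in results)      # missed cases

sensitivity = tp / (tp + fn)  # share of true cases the screen catches
specificity = tn / (tn + fp)  # share of non-cases correctly cleared
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```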



In This Article: Self-Report Tests, Measures, and Inventories in Clinical Psychology

Introduction and background information, followed by measures in these domains:

  • Broadband Measures
  • Internalizing Measures
  • Stress and Trauma Measures
  • Eating/Body Image Problems Measures
  • Obsessive-Compulsive Measures
  • Externalizing Measures
  • Thought Dysfunction Measures
  • Somatic Problems Measures
  • Interpersonal Functioning Measures


Self-Report Tests, Measures, and Inventories in Clinical Psychology. By Anthony Tarescavage. Last modified: 29 November 2022. DOI: 10.1093/obo/9780199828340-0300

There are thousands of psychological tests that rely on test-takers’ reports of themselves to measure their standing on psychological constructs of interest. This annotated bibliography on self-report inventories delineates over fifty of these self-report measures. Specifically, this review includes some of the most relevant self-report assessments in the major measurement domains of personality and psychopathology. All can be administered in the traditional paper-and-pencil format. Pertinent background information on using and evaluating these tests is described next.

Ben-Porath 2012 and Costa and McCrae 2009 provide historical background information regarding the measurement of personality and psychopathology. Cronbach and Meehl 1955, Pedhazur and Schmelkin 1991a, Pedhazur and Schmelkin 1991b, Reynolds and Ramsay 2003, and Kane 2006 describe the major considerations for evaluating the reliability and validity of self-report tests. American Educational Research Association, et al. 2014 describes best practices and ethical guidelines for using psychological tests. Finally, Lee, et al. 2017 describes how to map self-report inventories onto modern dimensional models of personality and psychopathology.

American Educational Research Association, American Psychological Association, National Council on Measurement in Education, and Joint Committee on Standards for Educational and Psychological Testing. 2014. Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

These standards provide information on the foundations of psychological testing, best practices in operations, and information on applying testing information.

Ben-Porath, Y. S. 2012. Self-report inventories: Assessing personality and psychopathology. In Handbook of psychology. Vol. 10, Assessment psychology. 2d ed. Edited by J. R. Graham and J. A. Naglieri, 622–644. Hoboken, NJ: John Wiley & Sons.

This comprehensive chapter discusses the origins of self-report measures of personality and psychopathology, criticisms of self-report inventories and responses, and the common threats to protocol validity (nonresponding, content-based invalid responding, over-reporting, and under-reporting).

Costa, P. T., and R. R. McCrae. 2009. The five-factor model and the NEO inventories. In Oxford handbook of personality assessment. Edited by J. N. Butcher, 299–322. New York: Oxford Univ. Press.

This chapter describes the most prominent model of personality, the five-factor model of personality. It also describes the NEO inventories that are most commonly used to measure this model.

Cronbach, L. J., and P. E. Meehl. 1955. Construct validity in psychological tests. Psychological Bulletin 52.4: 281–303.

DOI: 10.1037/h0040957

This seminal work provides background on one of the three primary areas of validity evidence—construct validity.

Kane, M. 2006. Content-related validity evidence in test development. In Handbook of test development. Edited by S. M. Downing and T. M. Haladyna, 131–153. Hillsdale, NJ: Lawrence Erlbaum.

This chapter provides background information on the third of three primary areas of validity evidence—content validity.

Lee, T. C., M. Sellbom, and C. J. Hopwood. 2017. Contemporary psychopathology assessment: Mapping major personality inventories onto empirical models of psychopathology. In Neuropsychological assessment in the age of evidence-based practice: Diagnostic and treatment evaluations. Edited by S. C. Bowden, 65–94. New York: Oxford Univ. Press.

This chapter discusses how self-report inventories can be used to assess contemporary dimensional models of psychopathology.

Pedhazur, E. J., and L. P. Schmelkin. 1991a. Criterion-related validation. In Measurement, design and analysis: An integrated approach. By E. J. Pedhazur and L. P. Schmelkin, 30–51. Hillsdale, NJ: Lawrence Erlbaum.

This chapter provides background information on the second of three primary areas of validity evidence—criterion validity.

Pedhazur, E. J., and L. P. Schmelkin. 1991b. Reliability. In Measurement, design and analysis: An integrated approach. By E. J. Pedhazur and L. P. Schmelkin, 81–117. Hillsdale, NJ: Lawrence Erlbaum.

This chapter provides background on evaluating the reliability of psychological tests.

Reynolds, C. R., and M. C. Ramsay. 2003. Bias in psychological assessment: An empirical review and recommendations. In Handbook of psychology. Vol. 10, Assessment psychology. Edited by J. R. Graham and J. A. Naglieri, 67–93. Hoboken, NJ: John Wiley & Sons.

This chapter provides an overview of research on bias in psychological testing, including a review of possible explanations for mean score differences across demographic groups.



Assessing psychological well-being: Self-report instruments for the NIH Toolbox


Objective: Psychological well-being (PWB) has a significant relationship with physical and mental health. As a part of the NIH Toolbox for the Assessment of Neurological and Behavioral Function, we developed self-report item banks and short forms to assess PWB. Study design and setting: Expert feedback and literature review informed the selection of PWB concepts and the development of item pools for positive affect, life satisfaction, and meaning and purpose. Items were tested with a community-dwelling US Internet panel sample of adults aged 18 and above (N = 552). Classical and item response theory (IRT) approaches were used to evaluate unidimensionality, fit of items to the overall measure, and calibrations of those items, including differential item function (DIF). Results: IRT-calibrated item banks were produced for positive affect (34 items), life satisfaction (16 items), and meaning and purpose (18 items). Their psychometric properties were supported based on the results of factor analysis, fit statistics, and DIF evaluation. All banks measured the concepts precisely (reliability ≥0.90) for more than 98 % of participants. Conclusion: These adult scales and item banks for PWB provide the flexibility, efficiency, and precision necessary to promote future epidemiological, observational, and intervention research on the relationship of PWB with physical and mental health.
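For readers unfamiliar with IRT calibration: each item receives parameters locating it on the latent trait theta. In the simplest dichotomous case, the two-parameter logistic (2PL) model gives P(endorse) = 1 / (1 + exp(-a(theta - b))). The NIH Toolbox banks use polytomous models for their rating-scale items, so the sketch below, with invented parameters, is only illustrative:

```python
import math

def p_endorse(theta, a, b):
    """2PL IRT model: probability that a respondent at latent trait
    theta endorses an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical calibrated items as (discrimination, difficulty) pairs.
items = [(1.8, -0.5), (1.2, 0.0), (2.1, 0.7)]

for theta in (-1.0, 0.0, 1.0):
    probs = [round(p_endorse(theta, a, b), 2) for a, b in items]
    print(f"theta={theta:+.1f}: {probs}")
```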

Keywords: Life satisfaction, Meaning, Positive affect, Psychological assessment, Well-being

DOI: 10.1007/s11136-013-0452-3

Full citation: Salsman, J. M., Lai, J. S., Hendrie, H. C., Butt, Z., Zill, N., Pilkonis, P. A., Peterson, C., Stoney, C. M., Brouwers, P., and Cella, D. 2014. Assessing psychological well-being: Self-report instruments for the NIH Toolbox. Quality of Life Research. doi:10.1007/s11136-013-0452-3

  • Systematic review
  • Open access
  • Published: 27 July 2011

A systematic review of the psychometric properties of self-report research utilization measures used in healthcare

  • Janet E Squires 1 ,
  • Carole A Estabrooks 2 ,
  • Hannah M O'Rourke 2 ,
  • Petter Gustavsson 3 ,
  • Christine V Newburn-Cook 2 &
  • Lars Wallin 4  

Implementation Science, volume 6, Article number: 83 (2011)

Background

In healthcare, a gap exists between what is known from research and what is practiced. Understanding this gap depends upon our ability to robustly measure research utilization.

The objectives of this systematic review were: to identify self-report measures of research utilization used in healthcare, and to assess the psychometric properties (acceptability, reliability, and validity) of these measures.

Methods

We conducted a systematic review of literature reporting use or development of self-report research utilization measures. Our search included multiple databases, ancestry searches, and a hand search. Acceptability was assessed by examining time to complete the measure and missing data rates. Our approach to reliability and validity assessment followed that outlined in the Standards for Educational and Psychological Testing.

Results

Of 42,770 titles screened, 97 original studies (108 articles) were included in this review. The 97 studies reported on the use or development of 60 unique self-report research utilization measures. Seven of the measures were assessed in more than one study. Study samples consisted of healthcare providers (92 studies) and healthcare decision makers (5 studies). No studies reported data on acceptability of the measures. Reliability was reported in 32 (33%) of the studies, representing 13 of the 60 measures. Internal consistency (Cronbach's Alpha) reliability was reported in 31 studies; values exceeded 0.70 in 29 studies. Test-retest reliability was reported in 3 studies with Pearson's r coefficients > 0.80. No validity information was reported for 12 of the 60 measures. The remaining 48 measures were classified into a three-level validity hierarchy according to the number of validity sources reported in 50% or more of the studies using the measure. Level one measures (n = 6) reported evidence from any three (out of four possible) Standards validity sources (which, in the case of single-item measures, was all applicable validity sources). Level two measures (n = 16) had evidence from any two validity sources, and level three measures (n = 26) from only one validity source.

Conclusions

This review reveals significant underdevelopment in the measurement of research utilization. Substantial methodological advances with respect to construct clarity, use of research utilization and related theory, use of measurement theory, and psychometric assessment are required. Also needed are improved reporting practices and the adoption of a more contemporary view of validity ( i.e. , the Standards ) in future research utilization measurement studies.

Clinical and health services research produces vast amounts of new research every year. Despite increased access by healthcare providers and decision-makers to this knowledge, uptake into practice is slow [ 1 , 2 ] and has resulted in a 'research-practice gap.'

Measuring research utilization

Recognition of, and a desire to narrow, the research-practice gap has led to the accumulation of a considerable body of knowledge on research utilization and related terms, such as knowledge translation, knowledge utilization, innovation adoption, innovation diffusion, and research implementation. Despite theoretical gains in the understanding of research utilization [ 3 , 4 ], a large and rapidly expanding literature addressing the individual factors associated with research utilization [ 5 , 6 ], and extensive work on the implementation of clinical practice guidelines in various health disciplines [ 7 , 8 ], little is known about how to robustly measure research utilization.

We located three theoretical papers explicitly addressing the measurement of knowledge utilization (of which research utilization is a component) [ 9 – 11 ], and one integrative review that examined the psychometric properties of self-report research utilization measures used in professions allied to medicine [ 12 ]. Each of these papers stressed a need for conceptual clarity and pluralism in measurement. Weiss [ 11 ] also argued for specific foci ( i.e ., focus on specific studies, people, issues, or organizations) when measuring knowledge utilization. Shortly thereafter, Dunn [ 9 ] proposed a linear four-step process for measuring knowledge utilization: conceptualization (what knowledge utilization is and how it is defined and classified); methods (given a particular conceptualization, what methods are available to observe knowledge use); measures (what scales are available to measure knowledge use); and reliability and validity. Dunn specifically urged that greater emphasis be placed on step four (reliability and validity). A decade later, Rich [ 10 ] provided a comprehensive overview of issues influencing knowledge utilization across many disciplines. He emphasized the complexity of the measurement process, suggesting that knowledge utilization may not always be tied to a specific action and that it may exist as more of an omnibus concept.

The only review of research utilization measures to date was conducted in 2003 by Estabrooks et al. [ 12 ]. The review was limited to self-report research utilization measures used in professions allied to medicine and to data explicitly interpreted as validity evidence; that is, only findings that the original study authors labeled as validity were extracted as 'supporting validity evidence'. A total of 43 articles from three online databases (CINAHL, Medline, and PubMed) comprised the final sample of articles included in the review. Two commonly used multi-item self-report measures (published in 16 papers) were identified--the Nurses Practice Questionnaire and the Research Utilization Questionnaire. An additional 16 published papers were identified that used single-item self-report questions to measure research utilization. Several problems with these research utilization measures were identified: lack of construct clarity of research utilization, lack of use of research utilization theories, lack of use of measurement theory, and, finally, lack of standard psychometric assessment.

The four papers [ 9 – 12 ] discussed above point to a persistent and unresolved problem--an inability to robustly measure research utilization. This presents both an important and a practical challenge to researchers and decision-makers who rely on such measures to evaluate the uptake and effectiveness of research findings to improve patient and organizational outcomes. There are multiple reasons why we believe the measurement of research utilization is important. The most important reason relates to designing and evaluating the effectiveness of interventions to improve patient outcomes. Research utilization is commonly assumed to have a positive impact on patient outcomes by helping to eliminate ineffective and potentially harmful practices and to implement more effective (research-based) practices. However, we can only determine whether patient outcomes are sensitive to varying levels of research utilization if we can first measure research utilization in a reliable and valid manner. If patient outcomes are sensitive to the use of research and we do not measure it, we, in essence, do the field more harm than good by ignoring a 'black box' of causal mechanisms that can influence research utilization. The causal mechanisms within this black box can, and should, be used to inform the design of interventions that aim to improve patient outcomes by increasing research utilization by care providers.

Study purpose and objectives

The study reported in this paper is a systematic review of the psychometric properties of self-report measures of research utilization used in healthcare. Specific objectives of this study were to: identify self-report measures of research utilization used in healthcare ( i.e ., used to measure research utilization by healthcare providers, healthcare decision makers, and in healthcare organizations); and assess the psychometric properties of these measures.

Study selection (inclusion and exclusion) criteria

Studies were included that met the following inclusion criteria: reported on the development or use of a self-report measure of research utilization; and the study population comprised one or more of the following groups--healthcare providers, healthcare decision makers, or healthcare organizations. We defined research utilization as the use of research-based (empirically derived) information. This information could be reported in a primary research article, a review/synthesis report, or a protocol. Where the study involved the use of a protocol, we required the research basis for the protocol to be apparent in the article. We excluded articles that reported on adherence to clinical practice guidelines, the rationale being that clinical practice guidelines can be based on non-research evidence ( e.g ., expert opinion). We also excluded articles reporting on the use of one specific research-based practice if the overall purpose of the study was not to examine research utilization.

Search strategy for identification of studies

We searched 12 bibliographic databases; details of the search strategy are located in Additional File 1 . We also hand searched the journal Implementation Science (a specialized journal in the research utilization field) and assessed the reference lists of all retrieved articles. The final set of included articles was restricted to those published in English, Danish, Swedish, or Norwegian (the working languages of the research team). There were no restrictions based on when the study was undertaken or publication status.

Selection of studies

Two team members (JES and HMO) independently screened all titles and abstracts (n = 42,770). Full text copies were retrieved for 501 titles, which represented all titles identified as having potential relevance to our objectives or where there was insufficient information to make a decision as to relevance. A total of 108 articles (representing 97 original studies) comprised the final sample. Disagreements were resolved by consensus. When consensus could not be reached, a third senior member of the review team (CAE, LW) acted as an arbitrator and made the final decision (n = 9 articles). Figure 1 summarizes the results of the screening/selection process. A list of retrieved articles that were excluded can be found in Additional File 2 .

Figure 1: Article screening and selection.

Data extraction

Two reviewers (JES and HMO) performed data extraction: one reviewer extracted the data, which was then checked for accuracy by a second reviewer. We extracted data on: year of publication, study design, setting, sampling, subject characteristics, methods, the measure of research utilization used, substantive theory, measurement theory, responsiveness (the extent to which the measure can assess change over time), reliability (variances and standard deviations of measurement errors, item response theory test information functions, and reliability coefficients, where these existed), reported statements of traditional validity (content validity, criterion validity, construct validity), and study findings reflective of the four sources of validity evidence (content, response processes, internal structure, and relations to other variables) outlined in the Standards for Educational and Psychological Testing (the Standards ) [ 13 ]. Content evidence refers to the extent to which the items in a self-report measure adequately represent the content domain of the concept or construct of interest. Response processes evidence refers to how respondents interpret, process, and elaborate upon item content and whether this behaviour is in accordance with the concept or construct being measured. Internal structure evidence examines the relationships between the items on a self-report measure to evaluate its dimensionality. Relations to other variables evidence provides the fourth source of validity evidence. External variables may include measures of criteria that the concept or construct of interest is expected to predict, as well as relationships to other scales hypothesized to measure the same concepts or constructs, and variables measuring related or different concepts or constructs [ 13 ]. In the Standards , validity is a unitary construct in which multiple evidence sources contribute to construct validity. A higher number of validity sources indicates stronger construct validity. An overview of the Standards approach to reliability and validity assessment is in Additional File 3 . All disagreements in data extraction were resolved by consensus.

There are no universal criteria to grade the quality of self-report measures. Therefore, in line with other recent measurement reviews [ 14 , 15 ], we did not use restrictive criteria to rate the quality of each study. Instead, we focused on performing a comprehensive assessment of the psychometric properties of the scores obtained using the research utilization measures reported in each study. In performing this assessment, we adhered to the Standards , which are considered best practice in the field of psychometrics [ 16 ]. Accordingly, we extracted data on all study results that could be grouped according to the Standards' critical reliability information and four validity evidence sources. To assess relations to other variables, we identified, a priori (based on commonly used research utilization theories and systematic reviews), established relationships between research utilization and other (external) variables (See Additional File 3 ). The external variables included: individual characteristics ( e.g ., attitude towards research use), contextual characteristics ( e.g ., role), organizational characteristics ( e.g ., hospital size), and interventions ( e.g ., use of reminders). All relationships between research use and external variables in the final set of included articles were then interpreted as supporting or refuting validity evidence. A relationship was coded as 'supporting validity evidence' if it was in the predicted direction and had the predicted significance, and as 'refuting validity evidence' if it was in the opposite direction or did not have the predicted significance.
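
To make that coding rule concrete, here is a minimal sketch in Python (our illustration only, not tooling from the review; all names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Relationship:
    """One observed relationship between research use and an external variable."""
    observed_direction: int       # +1 or -1: sign of the observed association
    predicted_direction: int      # +1 or -1: direction hypothesized a priori
    significant: bool             # whether the observed association was significant
    significance_predicted: bool  # whether significance was hypothesized a priori

def code_validity_evidence(rel: Relationship) -> str:
    """Apply the review's coding rule: a relationship in the predicted
    direction with the predicted significance counts as supporting
    validity evidence; anything else counts as refuting evidence."""
    same_direction = rel.observed_direction == rel.predicted_direction
    expected_significance = rel.significant == rel.significance_predicted
    return "supporting" if same_direction and expected_significance else "refuting"

# Example: a positive, significant association between attitude toward
# research and research use, where exactly that was predicted.
print(code_validity_evidence(Relationship(+1, +1, True, True)))  # supporting
```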

Data synthesis

The findings from the review are presented in narrative form. To synthesize the large volume of data extracted on validity, we developed a three-level hierarchy of self-report research utilization measures based on the number of validity sources reported in 50% or more of the studies for each measure. In the Standards , no one source of validity evidence is considered always superior to the other sources. Therefore, in our hierarchy, level one, two, and three measures provided evidence from any three, two, and one validity sources respectively. In the case of single-item measures, only three validity sources are applicable; internal structure validity evidence is not applicable as it assesses relationships between items. Therefore, a single-item measure within level one has evidence from all applicable validity sources.
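
The hierarchy assignment itself is a simple counting rule, sketched below under the same caveat (an illustrative reading of the rule as described, not the authors' code):

```python
from typing import Optional

VALIDITY_SOURCES = (
    "content",
    "response_processes",
    "internal_structure",
    "relations_to_other_variables",
)

def hierarchy_level(reported: list[set[str]], single_item: bool) -> Optional[int]:
    """Place a measure in the three-level hierarchy. `reported` holds, for
    each study assessing the measure, the set of validity sources that
    study reported. A source qualifies if it appears in 50% or more of the
    studies; internal structure is not applicable to single-item measures.
    Returns 1, 2, or 3, or None when no source qualifies."""
    applicable = [s for s in VALIDITY_SOURCES
                  if not (single_item and s == "internal_structure")]
    n_studies = len(reported)
    n_sources = sum(
        1 for s in applicable
        if sum(s in study for study in reported) * 2 >= n_studies
    )
    if n_sources >= 3:
        return 1
    if n_sources == 2:
        return 2
    if n_sources == 1:
        return 3
    return None

# Example: a multi-item measure assessed in two studies, both reporting
# content evidence and one also reporting relations to other variables.
print(hierarchy_level([{"content", "relations_to_other_variables"},
                       {"content"}], single_item=False))  # 2
```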

Objective 1: Identification and characteristics of self-report research utilization measures used in healthcare

In total, 60 unique self-report research utilization measures were identified. We grouped them into 10 classes as follows:

Nurses Practice Questionnaire (n = 1 Measure)

Research Utilization Survey (n = 1 Measure)

Edmonton Research Orientation Survey (n = 1 Measure)

Knott and Wildavsky Standards (n = 1 Measure)

Other Specific Practices Indices (n = 4 Measures) (See Additional File 4 )

Other General Research Utilization Indices (n = 10 Measures) (See Additional File 4 )

Past, Present, Future Use (n = 1 Measure)

Parahoo's Measure (n = 1 Measure)

Estabrooks' Kinds of Research Utilization (n = 1 Measure)

Other Single-Item Measures (n = 39 Measures)

Table 1 provides a description of each class of measures. Classes one through six contain multi-item measures, while classes seven through ten contain single-item measures; similar proportions of articles reported multi- and single-item measures (n = 51 and n = 59, respectively; two articles reported both multi- and single-item measures). Only seven measures were assessed in multiple studies: Nurses Practice Questionnaire; Research Utilization Survey; Edmonton Research Orientation Survey; a Specific Practice Index [ 17 , 18 ]; Past, Present, Future Use; Parahoo's Measure; and Estabrooks' Kinds of Research Utilization. All study reports claimed to measure research utilization; however, 13 of the 60 measures identified were proxy measures of research utilization. That is, they measure variables related to using research ( e.g ., reading research articles) but not research utilization directly. The 13 proxy measures are: Nurses Practice Questionnaire, Research Utilization Questionnaire, Edmonton Research Orientation Survey, and the ten Other General Research Utilization Indices.

The majority (n = 54) of measures were assessed with healthcare providers. Professional nurses comprised the sample in 56 studies (58%), followed by allied healthcare professionals (n = 25 studies, 26%), physicians (n = 7 studies, 7%), and multiple clinical staff groups (n = 5 studies, 5%). A small proportion of studies (n = 5 studies, 5%) measured research utilization by healthcare decision makers. The decision makers, in each study, were members of senior management with direct responsibility for making decisions for a healthcare organization and included: medical officers and program directors [ 19 ]; managers in ministries and regional health authorities [ 20 ]; senior administrators [ 21 ]; hospital managers [ 22 ]; and executive directors [ 23 ]. A different self-report measure was used in each of these five studies. The unit/organization was the unit of analysis in 6 of the 97 (6%) included studies [ 22 – 27 ]; a unit-level score for research utilization was calculated by aggregating the mean scores of individual care providers.
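
As an illustration of that unit-level aggregation (with invented numbers, not data from any included study):

```python
from statistics import mean

# Hypothetical individual-level research utilization scores, keyed by unit
scores_by_unit = {
    "unit_a": [3.2, 4.1, 3.8],
    "unit_b": [2.9, 3.5],
}

# Unit-level score = mean of the individual care providers' scores
unit_scores = {unit: mean(scores) for unit, scores in scores_by_unit.items()}
print(unit_scores)  # unit_a ≈ 3.7, unit_b = 3.2
```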

Most studies were conducted in North America (United States: n = 43, 44%; Canada: n = 22, 23%), followed by Europe (n = 22, 23%). Other geographic areas represented included: Australia (n = 5, 5%), Iran (n = 1, 1%), Africa (n = 2, 2%), and Taiwan (n = 2, 2%). With respect to date of publication, the first report included in this review was published in 1976 [ 28 ]. The majority of reports (n = 90, 83%) were published within the last 13 years (See Figure 2 ).

Figure 2: Publication timeline.

Objective 2: Psychometric assessment of the self-report research utilization measures

Our psychometric assessment involved three components: acceptability, reliability, and validity.

Acceptability

Acceptability, in terms of the time required to complete the research utilization measures and missing data rates (specific to the research utilization items), was not reported in any study.

Reliability

Reliability was reported in 32 (33%) of the studies (See Table 2 and Additional File 5 ). Internal consistency (Cronbach's Alpha) was the most commonly reported reliability statistic--it was reported for 13 of the 18 multi-item measures (n = 65, 67% of studies). Where reliability (Cronbach's Alpha) was reported, it almost always (n = 29 of 31 studies, 94%) exceeded the accepted standard (> 0.70) for scales intended to compare groups, as recommended by Nunnally and Bernstein [ 29 ]. The two exceptions were assessments of the Nurses Practice Questionnaire [ 30 – 32 ]. The tendency for reported reliability coefficients to exceed the accepted standard may reflect a reporting bias.
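
For reference, the internal consistency statistic reported in these studies is the standard Cronbach's alpha for a k-item scale, where sigma_i^2 is the variance of item i and sigma_X^2 is the variance of the total score:

```latex
% Cronbach's alpha for a k-item scale
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)
```

Alpha approaches 1 as the items covary strongly relative to the total-score variance; the 0.70 threshold cited above is Nunnally and Bernstein's recommendation for scales used to compare groups.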

Stability, or test-retest, reliability was reported in only three (3%) of the studies: two studies assessing the Nurses Practice Questionnaire [ 33 – 35 ], and one study assessing Stiefel's Research Use Index [ 36 ]. All three studies reported Pearson r coefficients greater than 0.80 using one-week intervals (Table 2 ). One study also assessed inter-rater reliability. Pain et al. [ 37 ] had trained research staff and study respondents rate the respondents' use of research on a 7-point scale. Inter-rater reliability among the interviewers was acceptable, with pairwise correlations ranging from 0.80 to 0.91 (Table 2 ). No studies reported other critical reliability information consistent with the Standards , such as variances or standard deviations of measurement errors, item response theory test information functions, or parallel forms coefficients.

Validity

No single research utilization measure had supporting validity evidence from all four evidence sources outlined in the Standards . For 12 measures [ 38 – 49 ], each in the 'Other Single-Item' class, there were no reported findings that could be classified as validity evidence. The remaining 48 measures were classified as level one (n = 6), level two (n = 16), or level three (n = 26) measures, according to whether the number of validity sources reported in 50% or more of the studies describing an assessment of the measure was three, two, or one, respectively. Level one measures displayed the highest number of validity sources and, thus, the strongest construct validity. A summary of the hierarchy is presented in Tables 3 , 4 , and 5 . More detailed validity data are located in Additional File 6 .

Measures reporting three sources of validity evidence (level one)

Six measures were grouped as level one: Specific Practices Indices (n = 1), General Research Utilization Indices (n = 3), and Other Single-Item Measures (n = 2) (Table 3 ). Each measure was assessed in a single study. Five [ 24 , 50 – 52 ] of the six measures displayed content, response processes, and relations to other variables validity evidence, while the assessment of one measure [ 36 ] provided internal structure validity evidence. A detailed summary of the level one measures is located in Table 6 .

Measures reporting two sources of validity evidence (level two)

Sixteen measures were grouped as level two: Nurses Practice Questionnaire (n = 1); Knott and Wildavsky Standards (n = 1); General Research Utilization Indices (n = 4); Specific Practices Indices (n = 2); Estabrooks' Kinds of Research Utilization (n = 1); Past, Present, Future Use (n = 1); and Other Single-Item Measures (n = 6) (Table 4 ). Most assessments occurred with nurses in hospitals. No single validity source was reported for all level two measures. For the 16 measures in level two, the most commonly reported evidence source was relations to other variables (reported for 12 [75%] of the measures), followed by response processes (n = 7 [44%] of the measures), content (n = 6 [38%] of the measures), and, lastly, internal structure (n = 1 [6%] of the measures). Four of the measures were assessed in multiple studies: Nurses Practice Questionnaire, a Specific Practices Index [ 17 , 18 ], Parahoo's Measure, and Estabrooks' Kinds of Research Utilization.

Measures reporting one source of validity evidence (level three)

The majority (n = 26) of research utilization measures identified fell into level three: Champion and Leach's Research Utilization Survey (n = 1); Edmonton Research Orientation Survey (n = 1); General Research Utilization Indices (n = 3); Specific Practices Indices (n = 1); Past, Present, Future Use (n = 1); and Other Single-Item Measures (n = 19) (Table 5 ). The majority of level three measures are single-item measures (n = 20) and have been assessed in a single study (n = 23). Similar to level two, there was no single source of validity evidence common across all of the level three measures. The most commonly reported validity source was content (reported for 12 [46%] of the measures), followed by response processes (n = 10, 38%), relations to other variables (n = 10, 38%), and, lastly, internal structure evidence (n = 1, 4%). Three level three measures were assessed in multiple studies: the Research Utilization Questionnaire; Past, Present, Future Use items; and the Edmonton Research Orientation Survey.

Additional properties

As part of our validity assessment, we paid special attention to how each measure 'functioned'; that is, were the measures behaving as they should? All six level one measures and the majority of level two measures (n = 12 of 16) displayed 'relations to other (external) variables' evidence, indicating that these measures function as the literature hypothesizes a research utilization measure should. Fewer measures in level three (n = 10 of 26) displayed such functioning (Table 5 and Additional File 5 ). We also looked for evidence of responsiveness of the measures (the extent to which a measure captures change over time); no evidence was reported.

Our discussion is organized around three areas: the state of the science of research utilization measurement, construct validity, and our proposed hierarchy of measures.

State of the science

In 2003, Estabrooks et al. [ 12 ] completed a review of self-report research utilization measures. By significantly extending the search criteria of that review, we identified 42 additional self-report research utilization measures, a substantial increase in the number of measures available. While this gives, on the surface, an optimistic picture of research utilization measurement, detailed inspection of the 108 articles included in our review revealed several limitations that seriously constrain our ability to validly measure research utilization. The limitations center on ambiguity between different measures and between studies using the same measure, and on methodological problems with the design and evaluation of the measures.

Ambiguity in self-report research utilization measures

There is ambiguity with respect to the naming of self-report research utilization measures. For example, similar measures have different names. Parahoo's Measure [ 53 ] and Pettengill's single item [ 54 ], for example, both ask participants one question--whether they have used research findings in their practice in the past two years or three years, respectively. Conversely, other measures that ask substantially different questions are similarly named; for example, Champion and Leach [ 55 ], Linde [ 56 ], and Tsai [ 57 , 58 ] all describe a Research Utilization Questionnaire. Further ambiguity was seen in the articles that described the modification of a pre-existing research utilization measure. In most cases, despite making significant modifications to the measure, the authors retained the original measure's name and thus masked the need for additional validity testing. The Nurses Practice Questionnaire is an example of this. Brett [ 33 ] originally developed the Nurses Practice Questionnaire, which consisted of 14 research-based practices, to assess research utilization by hospital nurses. The Nurses Practice Questionnaire was subsequently modified (in both the number and the content of the practices assessed, as well as the items that follow each practice) and used in eight additional studies [ 30 – 32 , 35 , 59 – 63 ], but each study retained the Nurses Practice Questionnaire name.

Methodological problems

In the earlier research utilization measurement review, Estabrooks et al. [ 12 ] identified four core methodological problems: lack of construct clarity, lack of use of research utilization theory, lack of use of measurement theory, and lack of psychometric assessment. In our review, we found that, despite an additional 10 years of research, 42 new measures, and 65 new reports of self-report research utilization measures, these problems and others persist.

Lack of construct clarity

Research utilization has been, and is likely to remain, a complex and contested construct. Issues around the clarity of research utilization measurement stem from four areas: a lack of definitional precision of research utilization, confusion around the formal structure of research utilization, lack of substantive theory to develop and evaluate research utilization measures, and confusion between factors associated with research utilization and the use of research per se .

Lack of definitional precision with respect to research utilization is well documented. In 1991, knowledge utilization scholar Thomas Backer [ 64 ] declared lack of definitional precision as part of a serious challenge of fragmentation that was facing researchers in the knowledge (utilization) field. Since then, there have been substantial efforts to understand what does and does not make research utilization happen. However, the issue of definitional precision continues to be largely ignored. In our review, definitions of research utilization were infrequently reported in the articles (n = 36 studies, 37%) [ 3 , 20 , 23 , 30 , 32 , 36 , 37 , 40 , 51 , 53 , 57 , 63 , 65 – 90 ] and even less frequently incorporated into the administered measures (n = 8 studies, 8%) [ 3 , 67 – 70 , 74 , 80 , 86 , 88 ]. Where definitions of research utilization were offered, they varied significantly between studies (even studies of the same measure) with one exception: Estabrooks' Kinds of Research Utilization. In this latter measure, the definitions offered were consistent in both the study reports and the administered measure.

A second reason for the lack of clarity in research utilization measurement is confusion around the formal structure of research utilization. The literature is characterized by multiple conceptualizations of research utilization. These conceptualizations influence how we define research utilization and, consequently, how we measure the construct and interpret the scores obtained from such measurement. Two prevailing conceptualizations dominate the field: research utilization as process ( i.e ., consisting of a series of stages/steps) and research utilization as variable or discrete event (a 'variance' approach). Despite debate in the literature with respect to these two conceptualizations, this review revealed that the vast majority of measures that quantify research utilization do so using a 'variable' approach. Only two measures were identified that assess research utilization using a 'process' conceptualization: the Nurses Practice Questionnaire [ 33 ] (which is based on Rogers' Innovation-Decision Process theory [ 91 , 92 ]) and Knott and Wildavsky's Standards measure (developed by Belkhodja et al. and based on Knott and Wildavsky's Standards of Research Use model [ 93 ]). Some scholars also characterize research utilization as typological, in addition to being a variable or a process. For example, Stetler [ 88 ] and Estabrooks [ 3 , 26 , 66 – 70 , 74 , 80 , 86 ] both have single items that measure multiple kinds of research utilization, with each kind individually conceptualized as a variable. Grimshaw et al. [ 8 ], in a systematic review of guideline dissemination and implementation strategies, reported a similar finding with respect to limited construct clarity in the measurement of guideline adherence in healthcare professionals. Measurement of intervention uptake, they argued, is problematic because measures mostly address the 'process' of uptake rather than its 'outcomes'. While both reviews point to a lack of construct clarity with respect to process versus variable/outcome measures, they report converse findings with respect to the dominant conceptualization in existing measures. This finding suggests that a comprehensive review targeting the psychometric properties of self-report measures used in guideline adherence is also needed. While each conceptualization (process, variable, typological) of research utilization is valid, there is, to date, no consensus regarding which one is best or most valid.

A third reason for the lack of clarity in research utilization measurement is limited use of substantive theory in the development of research utilization measures. There are numerous theories, frameworks, and models of research utilization and of related constructs, from the fields of nursing ( e.g ., [ 94 – 96 ]), organizational behaviour ( e.g ., [ 97 – 99 ]), and the social sciences ( e.g ., [ 100 ]). However, only 1 of the 60 measures identified in this review explicitly reported using research utilization theory in its development. The Nurses Practice Questionnaire [ 33 ] was developed based on Rogers' Innovation-Decision Process theory (one component of Rogers' larger Diffusion of Innovations theory [ 91 ]). The Innovation-Decision Process theory describes five stages in the adoption of an innovation (research): awareness, persuasion, decision, implementation, and confirmation. A similar finding regarding limited use of substantive theory was also reported by Grimshaw et al. [ 8 ] in their review of guideline dissemination and implementation strategies. This limited use of theory in the development and testing of self-report measures may therefore reflect the more general state of the science in the research utilization and related ( e.g ., knowledge translation) fields, which requires addressing.

A fourth and final reason that we identified for the lack of clarity in research utilization measurement is confusion between factors associated with research utilization and the use of research per se . The Nurses Practice Questionnaire [ 33 ] and all 10 Other General Research Utilization Indices [ 24 , 36 , 50 , 73 , 84 , 101 – 105 ] claim to directly measure research utilization. However, their items, while compatible with a process view of research utilization, do not directly measure research utilization. For example, 'reading research' is an individual factor that fits into the awareness stage of Rogers' Innovation-Decision Process theory. The Nurses Practice Questionnaire uses this item to create an overall 'adoption' score, which is interpreted as 'research use' , but it is not 'use' . A majority of the General Research Utilization Indices also include reading research as an item. In these measures, such individual factors are treated as proxies for research utilization. We caution researchers that while many individual factors like 'reading research' may be desirable qualities for making research utilization happen, they are not research utilization. Therefore, when selecting a research utilization measure, the aim of the investigation is paramount; if the aim is to examine research utilization as an event, then measures that incorporate proxies should be avoided.

Lack of measurement theory

Foundational to the development of any measure is measurement theory. The two most commonly used measurement theories are classical test score theory and modern measurement (or item response) theory. Classical test score theory proposes that an individual's observed score on a construct is the additive composite of their true score and random error. This theory forms the basis for traditional reliability theory (Cronbach's Alpha) [ 106 , 107 ]. Item response theory is a model-based theory that expresses the probability of an individual's response to an item as a function of an underlying trait. It proposes that as an individual's level of the trait (research utilization) increases, the probability of a correct (or, in the case of research utilization, a more positive) response also increases [ 108 , 109 ].
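
In symbols (a standard textbook formulation, given here for orientation rather than taken from any study in the review):

```latex
% Classical test score theory: the observed score X decomposes into a
% true score T and random error E; reliability is the share of
% observed-score variance attributable to true scores.
X = T + E, \qquad
\rho_{XX'} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(T) + \operatorname{Var}(E)}

% One common item response theory model (the two-parameter logistic):
% the probability of a positive response to item j rises with the latent
% trait \theta_i (here, research utilization), with discrimination a_j
% and location (difficulty) b_j.
P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-a_j(\theta_i - b_j)}}
```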

Similar to the previous review by Estabrooks et al. [ 12 ], none of the reports in our review explicitly stated that consideration of any kind was given to measurement theory in either the development or assessment of the respective measures. However, for 14 (23%) of the measures, there was reliability evidence consistent with the adoption of a classical test score theory approach. For example, Cronbach's alpha coefficients were reported for 13 (22%) of the measures (Table 2 ), and principal components (factor) analysis and item-total correlations were reported for 2 (3%) of the measures (Tables 3 and 4 ).

Lack of psychometric assessment

In the previous review, Estabrooks et al. [ 12 ] concluded, 'All of the current studies lack significant psychometric assessment of used instruments.' They further stated that over half of the studies in their review did not mention validity, and that only two measures displayed construct validity. This latter finding, we argue, may be attributed to the adoption of a traditional conceptualization of validity in which only evidence labeled as validity by the original study authors was considered. In our review, a more positive picture emerged, with only 12 (20%) of the self-report research utilization measures identified showing no evidence of construct validity. We attribute this, in part, to our implementation of the Standards as a framework for validity. Using this framework, we scrutinized all results (not just those labeled as validity) in terms of whether or not they added to overall construct validity.

Additional limitations to the field

Several additional limitations in research utilization measurement were also noted as a result of this review. They include: limited reporting of data reflective of reliability beyond standard internal consistency (Cronbach's Alpha) coefficients; limited reporting of study findings reflective of validity; limited assessments of the same measure in multiple (> 1) studies; lack of assessment of acceptability and responsiveness; overreliance on the assessment made in the index (original) study of a measure; and failure to re-establish validity when modifications are made and/or the measure is assessed in a new population or context.

Construct validity (the Standards)

Traditionally, validity has been conceptualized according to three distinct types: content, criterion, and construct. While this way of thinking about validity has been useful, it has also caused problems. For example, it has led to compartmentalized thinking about validity, making it 'easier' to overlook the fact that construct validity is really the whole of validity theory. It has also led to the incorrect view of validity as a property of a measure rather than of the scores (and resulting interpretations) obtained with the measure. A more contemporary conceptualization of validity (seen in the Standards ) was taken in this review. Using this approach, validity was conceptualized as a unitary concept with multiple sources of evidence, each contributing to overall (construct) validity [ 13 ]. We believe this conceptualization is both more relevant and more applicable to the study of research utilization than is the traditional conceptualization that dominates the literature [ 16 , 110 ].

All self-report measures require validity assessments. Without such assessments, little to no intrinsic value can be placed on findings obtained with the measure. Validity is associated with the interpretations assigned to the scores obtained using a measure, and thus is intended to be hypothesis-based [ 110 , 111 ]. Hence, to establish validity, desired score interpretations are first hypothesized to allow for the deliberate collection of data to support or refute those hypotheses [ 112 ]. In line with this thinking, data collected using a research utilization self-report measure will always be more or less valid depending on the purpose of the assessment, the population and setting, and the timing of the assessment ( e.g ., before or after an intervention). As a result, we are not able to declare any of the measures we identified in our review as valid or invalid, but only as more or less valid for selected populations, settings, and situations. This deviates substantially from traditional thinking, which suggests that validity either exists or it does not.

According to Cronbach and Meehl [ 113 ], construct validity rests in a nomological network that generates testable propositions relating scores obtained with self-report measures (as representations of a construct) to other constructs, in order to better understand the nature of the construct being measured [ 113 ]. This view is comparable to the traditional conceptualization of construct validity as existing or not, and is also in line with the views of philosophers of science from the first half of the 20th century ( e.g ., Duhem [ 114 ] and Lakatos [ 115 ]). Duhem and Lakatos both contended that any theory could be fully justified or falsified based on empirical evidence ( i.e ., based on data collected with a specific measure). From this perspective, construct validity exists or it does not. In the second half of the 20th century, however, a movement away from justification toward what Feyerabend [ 116 ] and Kuhn [ 117 ] described as 'nonjustificationism' occurred. In nonjustificationism, a theory is never fully justified or falsified. Instead, at any given time, it is a closer or further approximation of the truth than another (competing) theory. From this perspective, construct validity is a matter of degree ( i.e ., more or less valid) and can change with the sample, setting, and situation being assessed. This is in line with the more contemporary (the Standards ) conceptualization of validity.

Self-report research utilization measure hierarchy

The Standards [ 13 ] provided us with a framework to create a hierarchy of research utilization measures and, thus, to synthesize a large volume of psychometric data. In an attempt to display the overall extent of construct validity of the measures identified, our hierarchy (consistent with the Standards ) placed equal weight on all four evidential sources. While we were able to categorize 48 of the 60 self-report research utilization measures identified into the hierarchy, several cautions exist with respect to its use. First, the levels in the hierarchy are based on the number of validity sources reported, and not on the actual source or the quality of evidence within each source. Second, some measures in our hierarchy may appear to have strong validity only because they have been subjected to limited testing. For example, the six measures in level one have each been tested in only a single study. Third, the hierarchy included all 48 measures that displayed any validity evidence; some of these measures, however, are proxies of research utilization. Overall, the hierarchy is intended to present an overview of validity testing to date on the research utilization measures identified. It is meant to inform researchers regarding what testing has been done and, importantly, where additional testing is needed.

Limitations

Although rigorous and comprehensive methods were used for this review, there are three study limitations. First, while we reviewed dissertation databases, we did not search all grey literature sources. Second, due to limited reporting of findings consistent with the four sources of validity evidence in the Standards , we may have concluded lower levels of validity for some measures than actually exist. In the latter case, our findings may reflect poor reporting rather than less validity. Third, our decision to exclude articles that reported on healthcare providers' adherence to clinical practice guidelines may be responsible for the limited number of articles sampling physicians included in the review. A systematic review conducted by Grimshaw et al. [ 8 ] on clinical practice guidelines reported physicians alone were the target of 174 (74%) of the 235 studies included in that review. A future review examining the psychometric properties of self-report measures used to quantify guideline adherence would therefore be a fruitful avenue of inquiry.

In this review, we identified 60 unique self-report research utilization measures used in healthcare. While this appears to be a large set of measures, our assessment paints a rather discouraging picture of research utilization measurement. Several of the measures, while labeled research utilization measures, did not assess research utilization per se . Substantial methodological advances in the research utilization field, focusing on measurement (in particular with respect to construct clarity, use of measurement theory, and psychometric assessment), are urgently needed. These advances are foundational to ensuring the availability of defensible self-report measures of research utilization. Also needed are improved reporting practices and the adoption of a more contemporary view of validity (the Standards ) in future research utilization measurement studies.

Haines A, Jones R: Implementing findings of research. BMJ. 1994, 308 (6942): 1488-1492.

Glaser EM, Abelson HH, Garrison KN: Putting Knowledge to Use: Facilitating the Diffusion of Knowledge and the Implementation of Planned Change. 1983, San Francisco: Jossey-Bass

Estabrooks CA: The conceptual structure of research utilization. Research in Nursing and Health. 1999, 22 (3): 203-216. 10.1002/(SICI)1098-240X(199906)22:3<203::AID-NUR3>3.0.CO;2-9.

Stetler C: Research utilization: Defining the concept. Image: The Journal of Nursing Scholarship. 1985, 17: 40-44. 10.1111/j.1547-5069.1985.tb01415.x.

Godin G, Belanger-Gravel A, Eccles M, Grimshaw J: Healthcare professionals' intentions and behaviours: A systematic review of studies based on social cognitive theories. Implementation Science. 2008, 3 (36):

Squires J, Estabrooks C, Gustavsson P, Wallin L: Individual determinants of research utilization by nurses: A systematic review update. Implementation Science. 2011, 6 (1):

Grimshaw JM, Eccles MP, Walker AE, Thomas RE: Changing physicians' behavior: What works and thoughts on getting more things to work. Journal of Continuing Education in the Health Professions. 2002, 22 (4): 237-243. 10.1002/chp.1340220408.

Grimshaw JM, Thomas RE, MacLennan G, Fraser CR, Vale L, Whitty P, Eccles MP, Matowe L, Shirran L, Wensing M: Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technology Assessment. 2004, 8 (6): 1-72.

Dunn WN: Measuring knowledge use. Knowledge: Creation, Diffusion, Utilization. 1983, 5 (1): 120-133.

Rich RF: Measuring knowledge utilization processes and outcomes. Knowledge and Policy: International Journal of Knowledge Transfer and Utilization. 1997, 3: 11-24.

Weiss CH: Measuring the use of evaluation. Utilizing evaluation: Concepts and measurement techniques. Edited by: Ciarlo JA. 1981, Beverly Hills, CA: Sage, 17-33.

Estabrooks C, Wallin L, Milner M: Measuring knowledge utilization in health care. International Journal of Policy Analysis & Evaluation. 2003, 1: 3-36.

American Educational Research Association, American Psychological Association, National Council on Measurement in Education: Standards for Educational and Psychological Testing. 1999, Washington, D.C.: American Educational Research Association

Shaneyfelt T, Baum K, Bell D, Feldstein D, Houston T, Kaatz S, Whelan C, Green M: Instruments for Evaluating Education in Evidence-Based Practice. JAMA. 2006, 296: 1116-1127. 10.1001/jama.296.9.1116.

Kirkova J, Davis M, Walsh D, Tiernan E, O'Leary N, LeGrand S, Lagman R, Mitchell-Russell K: Cancer symptom assessment instruments: A systematic review. Journal of Clinical Oncology. 2006, 24 (9): 1459-1473. 10.1200/JCO.2005.02.8332.

Streiner D, Norman G: Health Measurement Scales: A practical Guide to their Development and Use. 2008, Oxford: Oxford University Press, 4

Tita ATN, Selwyn BJ, Waller DK, Kapadia AS, Dongmo S: Evidence-based reproductive health care in Cameroon: population-based study of awareness, use and barriers. Bulletin of the World Health Organization. 2005, 83 (12): 895-903.

Tita AT, Selwyn BJ, Waller DK, Kapadia AS, Dongmo S: Factors associated with the awareness and practice of evidence-based obstetric care in an African setting. BJOG: An International Journal of Obstetrics & Gynaecology. 2006, 113 (9): 1060-1066. 10.1111/j.1471-0528.2006.01042.x.

Dobbins M, Cockerill R, Barnsley J: Factors affecting the utilization of systematic reviews. A study of public health decision makers. International Journal of Technology Assessment in Health Care. 2001, 17 (2): 203-214. 10.1017/S0266462300105069.

Belkhodja O, Amara N, Landry R, Ouimet M: The extent and organizational determinants of research utilization in Canadian health services organizations. Science Communication. 2007, 28: 377-417. 10.1177/1075547006298486.

Knudsen HK, Roman PM: Modeling the use of innovations in private treatment organizations: The role of absorptive capacity. Journal of Substance Abuse Treatment. 2004, 26 (1): 353-361.

Meehan SMS: An exploratory study of research management programs: Enhancing use of health services research results in health care organizations. (Volumes I and II). Thesis. 1988, The George Washington University

Barwick MA, Boydell KM, Stasiulis E, Ferguson HB, Blase K, Fixsen D: Research utilization among children's mental health providers. Implementation Science. 2008, 3: 19-19. 10.1186/1748-5908-3-19.

Reynolds MIA: An Investigation of Organizational Factors Affecting Research Utilization in Nursing Organizations. Thesis. 1981, University of Michigan

Molassiotis A: Nursing research within bone marrow transplantation in Europe: An evaluation. European Journal of Cancer Care. 1997, 6 (4): 257-261. 10.1046/j.1365-2354.1997.00034.x.

Estabrooks CA, Scott S, Squires JE, Stevens B, O'Brien-Pallas L, Watt-Watson J, Profetto-McGrath J, McGilton K, Golden-Biddle K, Lander J: Patterns of research utilization on patient care units. Implementation Science. 2008, 3: 31. 10.1186/1748-5908-3-31.

Pepler CJ, Edgar L, Frisch S, Rennick J, Swidzinski M, White C, Brown TG, Gross J: Unit culture and research-based nursing practice in acute care. Canadian Journal of Nursing Research. 2005, 37 (3): 66-85.

Kirk SA, Osmalov MJ: Social workers' involvement in research. Clinical social work: research and practice. Edited by: Russell MN. 1976, Newbury Park, Calif.: Sage Publications, 121-124.

Nunnally J, Bernstein I: Psychometric Theory. 1994, New York: McGraw-Hill, 3

Rodgers SE: A study of the utilization of research in practice and the influence of education. Nurse Education Today. 2000, 20 (4): 279-287. 10.1054/nedt.1999.0395.

Rodgers SE: The extent of nursing research utilization in general medical and surgical wards. Journal of Advanced Nursing. 2000, 32 (1): 182-193. 10.1046/j.1365-2648.2000.01416.x.

Berggren A: Swedish midwives' awareness of, attitudes to and use of selected research findings. Journal of Advanced Nursing. 1996, 23 (3): 462-470. 10.1111/j.1365-2648.1996.tb00007.x.

Brett JL: Use of nursing practice research findings. Nursing Research. 1987, 36 (6): 344-349.

Brett JL: Organizational integrative mechanisms and adoption of innovations by nurses. Nursing Research. 1989, 38 (2): 105-110.

Thompson CJ: Extent and factors influencing research utilization among critical care nurses. Thesis. 1997, Texas Woman's University, College of Nursing

Stiefel KA: Career commitment, nursing unit culture, and nursing research utilization. Thesis. 1996, University of South Carolina

Pain K, Magill-Evans J, Darrah J, Hagler P, Warren S: Effects of profession and facility type on research utilization by rehabilitation professionals. Journal of Allied Health. 2004, 33 (1): 3-9.

Dysart AM, Tomlin GS: Factors related to evidence-based practice among U.S. occupational therapy clinicians. American Journal of Occupational Therapy. 2002, 56: 275-284. 10.5014/ajot.56.3.275.

Ersser SJ, Plauntz L, Sibley A: Research activity and evidence-based practice within DNA: a survey. Dermatology Nursing. 2008, 20 (3): 189-194.

Heathfield ADM: Research utilization in hand therapy practice using a World Wide Web survey design. Thesis. 2000, Grand Valley State University

Kelly KA: Translating research into practice: The physicians' perspective. Thesis. 2008, State University of New York at Albany

Mukohara K, Schwartz MD: Electronic delivery of research summaries for academic generalist doctors: A randomised trial of an educational intervention. Medical Education. 2005, 39 (4): 402-409. 10.1111/j.1365-2929.2005.02109.x.

Niederhauser VP, Kohr L: Research endeavors among pediatric nurse practitioners (REAP) study. Journal of Pediatric Health Care. 2005, 19 (2): 80-89.

Olympia RP, Khine H, Avner JR: The use of evidence-based medicine in the management of acutely ill children. Pediatric Emergency Care. 2005, 21 (8): 518-522. 10.1097/01.pec.0000175451.38663.d3.

Scott I, Heyworth R, Fairweather P: The use of evidence-based medicine in the practice of consultant physicians: Results of a questionnaire survey. Australian and New Zealand Journal of Medicine. 2000, 30 (3): 319-326. 10.1111/j.1445-5994.2000.tb00832.x.

Upton D: Clinical effectiveness and EBP 3: application by health-care professionals. British Journal of Therapy & Rehabilitation. 1999, 6 (2): 86-90.

Veeramah V: The use of research findings in nursing practice. Nursing Times. 2007, 103 (1): 32-33.

Walczak JR, McGuire DB, Haisfield ME, Beezley A: A survey of research-related activities and perceived barriers to research utilization among professional oncology nurses. Oncology Nursing Forum. 1994, 21 (4): 710-715.

Bjorkenheim J: Knowledge and social work in health care - the case of Finland. Social Work in Health Care. 2007, 44 (3): 261. 10.1300/J010v44n03_09.

Varcoe C, Hilton A: Factors affecting acute-care nurses' use of research findings. Canadian Journal of Nursing Research. 1995, 27 (4): 51-71.

Dobbins M, Cockerill R, Barnsley J: Factors affecting the utilization of systematic reviews: A study of public health decision makers. International Journal of Technology Assessment in Health Care. 2001, 17 (2): 203-214. 10.1017/S0266462300105069.

Suter E, Vanderheyden LC, Trojan LS, Verhoef MJ, Armitage GD: How important is research-based practice to chiropractors and massage therapists?. Journal of Manipulative and Physiological Therapeutics. 2007, 30 (2): 109-115. 10.1016/j.jmpt.2006.12.013.

Parahoo K: Research utilization and research related activities of nurses in Northern Ireland. International Journal of Nursing Studies. 1998, 35 (5): 283-291. 10.1016/S0020-7489(98)00041-8.

Pettengill MM, Gillies DA, Clark CC: Factors encouraging and discouraging the use of nursing research findings. Image--the Journal of Nursing Scholarship. 1994, 26 (2): 143-147. 10.1111/j.1547-5069.1994.tb00934.x.

Champion VL, Leach A: Variables related to research utilization in nursing: an empirical investigation. Journal of Advanced Nursing. 1989, 14 (9): 705-710. 10.1111/j.1365-2648.1989.tb01634.x.

Linde BJ: The effectiveness of three interventions to increase research utilization among practicing nurses. Thesis. 1989, The University of Michigan

Tsai S: Nurses' participation and utilization of research in the Republic of China. International Journal of Nursing Studies. 2000, 37 (5): 435-444. 10.1016/S0020-7489(00)00023-7.

Tsai S: The effects of a research utilization in-service program on nurses. International Journal of Nursing Studies. 2003, 40 (2): 105-113. 10.1016/S0020-7489(02)00036-6.

Barta KM: Information-seeking, research utilization, and barriers to research utilization of pediatric nurse educators. Journal of Professional Nursing: Official Journal of the American Association of Colleges of Nursing. 1995, 11 (1): 49-57.

Coyle LA, Sokop AG: Innovation adoption behavior among nurses. Nursing Research. 1990, 39 (3): 176-180.

Michel Y, Sneed NV: Dissemination and use of research findings in nursing practice. Journal of Professional Nursing. 1995, 11 (5): 306-311. 10.1016/S8755-7223(05)80012-2.

Rutledge DN, Greene P, Mooney K, Nail LM, Ropka M: Use of research-based practices by oncology staff nurses. Oncology Nursing Forum. 1996, 23 (8): 1235-1244.

Squires JE, Moralejo D, LeFort SM: Exploring the role of organizational policies and procedures in promoting research utilization in registered nurses. Implementation Science. 2007, 2 (1):

Backer TE: Knowledge utilization: The third wave. Knowledge: Creation, Diffusion, Utilization. 1991, 12 (3): 225-240.

Butler L: Valuing research in clinical practice: a basis for developing a strategic plan for nursing research. The Canadian Journal of Nursing Research. 1995, 27 (4): 33-49.

Cobban SJ, Profetto-McGrath J: A pilot study of research utilization practices and critical thinking dispositions of Alberta dental hygienists. International Journal of Dental Hygiene. 2008, 6 (3): 229-237. 10.1111/j.1601-5037.2008.00299.x.

Connor N: The relationship between organizational culture and research utilization practices among nursing home departmental staff. Thesis. 2007, Dalhousie University

Estabrooks CA: Modeling the individual determinants of research utilization. Western Journal of Nursing Research. 1999, 21 (6): 758-772. 10.1177/01939459922044171.

Estabrooks CA, Kenny DJ, Adewale AJ, Cummings GG, Mallidou AA: A comparison of research utilization among nurses working in Canadian civilian and United States Army healthcare settings. Research in Nursing and Health. 2007, 30 (3): 282-296. 10.1002/nur.20218.

Profetto-McGrath J, Hesketh KL, Lang S, Estabrooks CA: A study of critical thinking and research utilization among nurses. Western Journal of Nursing Research. 2003, 25 (3): 322-337. 10.1177/0193945902250421.

Hansen HE, Biros MH, Delaney NM, Schug VL: Research utilization and interdisciplinary collaboration in emergency care. Academic Emergency Medicine. 1999, 6 (4): 271-279. 10.1111/j.1553-2712.1999.tb00388.x.

Hatcher S, Tranmer J: A survey of variables related to research utilization in nursing practice in the acute care setting. Canadian Journal of Nursing Administration. 1997, 10 (3): 31-53.

Karlsson U, Tornquist K: What do Swedish occupational therapists feel about research? A survey of perceptions, attitudes, intentions, and engagement. Scandinavian Journal of Occupational Therapy. 2007, 14 (4): 221-229. 10.1080/11038120601111049.

Kenny DJ: Nurses' use of research in practice at three US Army hospitals. Canadian Journal of Nursing Leadership. 2005, 18 (3): 45-67.

Lacey EA: Research utilization in nursing practice -- a pilot study. Journal of Advanced Nursing. 1994, 19 (5): 987-995. 10.1111/j.1365-2648.1994.tb01178.x.

McCleary L, Brown GT: Research utilization among pediatric health professionals. Nursing and Health Sciences. 2002, 4 (4): 163-171. 10.1046/j.1442-2018.2002.00124.x.

McCleary L, Brown GT: Use of the Edmonton research orientation scale with nurses. Journal of Nursing Measurement. 2002, 10 (3): 263-275. 10.1891/jnum.10.3.263.52559.

McCleary L, Brown GT: Association between nurses' education about research and their reseach use. Nurse Education Today. 2003, 23 (8): 556-565. 10.1016/S0260-6917(03)00084-4.

McCloskey DJ: The relationship between organizational factors and nurse factors affecting the conduct and utilization of nursing research. Thesis. 2005, George Mason University

Milner FM, Estabrooks CA, Humphrey C: Clinical nurse educators as agents for change: increasing research utilization. International Journal of Nursing Studies. 2005, 42 (8): 899-914. 10.1016/j.ijnurstu.2004.11.006.

Nash MA: Research utilization among Idaho nurses. Thesis. 2005, Gonzaga University

Ohrn K, Olsson C, Wallin L: Research utilization among dental hygienists in Sweden -- a national survey. International Journal of Dental Hygiene. 2005, 3 (3): 104-111. 10.1111/j.1601-5037.2005.00135.x.

Olade RA: Evidence-based practice and research utilization activities among rural nurses. Journal of Nursing Scholarship. 2004, 36 (3): 220-225. 10.1111/j.1547-5069.2004.04041.x.

Pelz DC, A HJ, Ciarlo JA: Measuring utilization of nursing research. Utilizing evaluation: Concepts and measurement techniques. Edited by: Anonymous. 1981, Beverly Hills, CA: Sage, 125-149.

Prin PL, Mills MD, Gerdin U: Nurses' MEDLINE usage and research utilization. Nursing informatics: the impact of nursing knowledge on health care informatics proceedings of NI'97, Sixth Triennial International Congress of IMIA-NI, Nursing Informatics of International Medical Informatics Association. Edited by: Amsterdam. 1997, Netherlands: IOS Press, 46: 451-456.

Profetto-McGrath J, Smith KB, Hugo K, Patel A, Dussault B: Nurse educators' critical thinking dispositions and research utilization. Nurse Education in Practice. 2009, 9 (3): 199-208. 10.1016/j.nepr.2008.06.003.

Sekerak DK: Characteristics of physical therapists and their work environments which foster the application of research in clinical practice. Thesis. 1992, The University of North Carolina at Chapel Hill

Stetler CB, DiMaggio G: Research utilization among clinical nurse specialists. Clinical Nurse Specialist. 1991, 5 (3): 151-155.

Wallin L, Bostrom A, Wikblad K, Ewald U: Sustainability in changing clinical practice promotes evidence-based nursing care. Journal of Advanced Nursing. 2003, 41 (5): 509-518. 10.1046/j.1365-2648.2003.02574.x.

Wells N, Baggs JG: A survey of practicing nurses' research interests and activities. Clinical Nurse Specialist: The Journal for Advanced Nursing Practice. 1994, 8 (3): 145-151. 10.1097/00002800-199405000-00009.

Rogers EM: Diffusion of Innovations. 1983, New York: The Free Press, 3

Rogers E: Diffusion of Innovations. 1995, New York: The Free Press, 4

Knott J, Wildavsky A: If dissemination is the solution, what is the problem?. Knowledge: Creation, Diffusion, Utilization. 1980, 1 (4): 537-578.

Titler MG, Kleiber C, Steelman VJ, Rakel BA, Budreau G, Everett LQ, Buckwalter KC, Tripp-Reimer T, Goode CJ: The Iowa Model of evidence-based practice to promote quality care. Critical Care Nursing Clinics of North America. 2001, 13 (4): 497-509.

Kitson A, Harvey G, McCormack B: Enabling the implementation of evidence based practice: a conceptual framework. Quality in Health Care. 1998, 7 (3): 149-158. 10.1136/qshc.7.3.149.

Logan J, Graham ID: Toward a comprehensive interdisciplinary model of health care research use. Science Communication. 1998, 20 (2): 227-246. 10.1177/1075547098020002004.

Abrahamson E, Rosenkopf L: Institutional and competitive bandwagons - Using mathematical-modeling as a tool to explore innovation diffusion. Academy of Management Review. 1993, 18 (3): 487-517.

Warner K: A 'desperation-reaction' model of medical diffusion. Health Services Research. 1975, 10 (4): 369-383.

Orlikowski WJ: Improvising organizational transformation over time: A situated change perspective. Information Systems Research. 1996, 7 (1): 63-92. 10.1287/isre.7.1.63.

Weiss C: The many meanings of research utilization. Public Administration Review. 1979, 39 (5): 426-431. 10.2307/3109916.

Forbes SA, Bott MJ, Taunton RL: Control over nursing practice: a construct coming of age. Journal of Nursing Measurement. 1997, 5 (2): 179-190.

Grasso AJ, Epstein I, Tripodi T: Agency-Based Research Utilization in a Residential Child Care Setting. Administration in Social Work. 1988, 12 (4): 61-

Morrow-Bradley C, Elliott R: Utilization of psychotherapy research by practicing psychotherapists. American Psychologist. 1986, 41 (2): 188-197.

Rardin DK: The Mesh of research Tand practice: The effect of cognitive style on the use of research in practice of psychotherapy. Thesis. 1986, University of Maryland College Park

Kamwendo K, Kamwendo K: What do Swedish physiotherapists feel about research? A survey of perceptions, attitudes, intentions and engagement. Physiotherapy Research International. 2002, 7 (1): 23-34. 10.1002/pri.238.

Hambleton R, Jones R: Comparison of classical test theory and item response theory and their applications to test development. Educational Measurement: Issues & Practice. 1993, 12 (3):

Ellis BB, Mead AD: Item analysis: Theory and practice using classical and modern test theory. Handbook of Research Methods in Industrial and Organizational Psychology. 2002, Blackwell Publications, 324-343.

Hambleton R, Swaminathan H, Rogers J: Fundamentals of Item Response Theory. Newbury Park, CA: Sage. 1991

Van der Linden W, Hambleton R: Handbook of Modern Item Response Theory. New York: Springer. 1997

Downing S: Validity: on the meaningful interpretation of assessment data. Medical Education. 2003, 37: 830-837. 10.1046/j.1365-2923.2003.01594.x.

Kane M: Validating high-stakes testing programs. Educational Measurement: Issues and Practice. 2002, Spring 2002: 31-41.

Kane MT: An argument-based approach to validity. Psychological Bulletin. 1992, 112 (3): 527-535.

Cronbach LJ, Meehl PE: Construct validity in psychological tests. Psychological Bulletin. 1955, 52: 281-302.

Duhem P: The Aim and Structure of Physical Theory. 1914, Princeton University Press

Lakatos I: Criticism and methodology of scientific research programs. Proceeding of the Aristotelian Society for the Systematic Study of Philosophy. 1968, 69: 149-186.

Feyerabend P: How to be a good empiricist - a plea for tolerance in matters epistemological. Philosophy of Science: The Central Issues. Edited by: Curd M, Cover J. 1963, New York: W.W. Norton & Company, 922-949.

Kuhn TS: The Structure of Scientific Revolutions. 1970, Chicago: University of Chicago Press, 2

Bostrom A, Wallin L, Nordstrom G: Research use in the care of older people: a survey among healthcare staff. International Journal of Older People Nursing. 2006, 1 (3): 131-140. 10.1111/j.1748-3743.2006.00014.x.

Bostrom AM, Wallin L, Nordstrom G: Evidence-based practice and determinants of research use in elderly care in Sweden. Journal of Evaluation in Clinical Practice. 2007, 13 (4): 665-10.1111/j.1365-2753.2007.00807.x.

Bostrom AM, Kajermo KN, Nordstrom G, Wallin L: Barriers to research utilization and research use among registered nurses working in the care of older people: Does the BARRIERS Scale discriminate between research users and non-research users on perceptions of barriers?. Implementation Science. 2008, 3 (1):

Humphris D, Hamilton S, O'Halloran P, Fisher S, Littlejohns P: Do diabetes nurse specialists utilise research evidence?. Practical Diabetes International. 1999, 16 (2): 47-50. 10.1002/pdi.1960160213.

Humphris D, Littlejohns P, Victor C, O'Halloran P, Peacock J: Implementing evidence-based practice: factors that influence the use of research evidence by occupational therapists. British Journal of Occupational Therapy. 2000, 63 (11): 516-222.

McCloskey DJ, McCloskey DJ: Nurses' perceptions of research utilization in a corporate health care system. Journal of Nursing Scholarship. 2008, 40 (1): 39-45. 10.1111/j.1547-5069.2007.00204.x.

Tranmer JE, Lochhaus-Gerlach J, Lam M: The effect of staff nurse participation in a clinical nursing research project on attitude towards, access to, support of and use of research in the acute care setting. Canadian Journal of Nursing Leadership. 2002, 15 (1): 18-26.

Pain K, Hagler P, Warren S: Development of an instrument to evaluate the research orientation of clinical professionals. Canadian Journal of Rehabilitation. 1996, 9 (2): 93-100.

Bonner A, Sando J: Examining the knowledge, attitude and use of research by nurses. Journal of Nursing Management. 2008, 16 (3): 334-343. 10.1111/j.1365-2834.2007.00808.x.

Henderson A, Winch S, Holzhauser K, De Vries S: The motivation of health professionals to explore research evidence in their practice: an intervention study. Journal of Clinical Nursing. 2006, 15 (12): 1559-1564. 10.1111/j.1365-2702.2006.01637.x.

Waine M, Magill-Evans J, Pain K: Alberta occupational therapisits' perspectives on and participation in research. Canadian Journal of Occupational Therapy. 1997, 64 (2): 82-88.

Aron J: The utilization of psychotherapy research on depression by clinical psychologists. Thesis. 1990, Auburn University

Brown DS: Nursing education and nursing research utilization: is there a connection in clinical settings?. Journal of Continuing Education in Nursing. 1997, 28 (6): 258-262. quiz 284

Parahoo K: A comparison of pre-Project 2000 and Project 2000 nurses' perceptions of their research training, research needs and of their use of research in clinical areas. Journal of Advanced Nursing. 1999, 29 (1): 237-245. 10.1046/j.1365-2648.1999.00882.x.

Parahoo K: Research utilization and attitudes towards research among psychiatric nurses in Northern Ireland. Journal of Psychiatric and Mental Health Nursing. 1999, 6 (2): 125-135. 10.1046/j.1365-2850.1999.620125.x.

Parahoo K, McCaughan EM: Research utilization among medical and surgical nurses: A comparison of their self reports and perceptions of barriers and facilitators. Journal of Nursing Management. 2001, 9 (1): 21-30. 10.1046/j.1365-2834.2001.00237.x.

Parahoo K, Barr O, McCaughan E: Research utilization and attitudes towards research among learning disability nurses in Northern Ireland. Journal of Advanced Nursing. 2000, 31 (3): 607-613. 10.1046/j.1365-2648.2000.01316.x.

Valizadeh L, Zamanzadeh V: Research in brief: Research utilization and research attitudes among nurses working in teaching hospitals in Tabriz, Iran. Journal of Clinical Nursing. 2003, 12: 928-930. 10.1046/j.1365-2702.2003.00798.x.

Veeramah V: Utilization of research findings by graduate nurses and midwives. Journal of Advanced Nursing. 2004, 47 (2): 183-191. 10.1111/j.1365-2648.2004.03077.x.

Callen JL, Fennell K, McIntosh JH: Attitudes to, and use of, evidence-based medicine in two Sydney divisions of general practice. Australian Journal of Primary Health. 2006, 12 (1): 40-46. 10.1071/PY06007.

Cameron KAV, Ballantyne S, Kulbitsky A, Margolis-Gal M, Daugherty T, Ludwig F: Utilization of evidence-based practice by registered occupational therapists. Occupational Therapy International. 2005, 12 (3): 123-136. 10.1002/oti.1.

Elliott V, Wilson SE, Svensson J, Brennan P: Research utilisation in sonographic practice: Attitudes and barriers. Radiography. 2008

Erler CJ, Fiege AB, Thompson CB: Flight nurse research activities. Air Medical Journal. 2000, 19 (1): 13-18. 10.1016/S1067-991X(00)90086-5.

Logsdon C, Davis DW, Hawkins B, Parker B, Peden A: Factors related to research utilization by registered nurses in Kentucky. Kentucky Nurse. 1998, 46 (1): 23-26.

Miller JP: Speech-language pathologists' use of evidence-based practice in assessing children and adolescents with cognitive-communicative disabilities: a survey. Thesis. 2007, Eastern Washington University

Nelson TD, Steele RG: Predictors of practitioner self-reported use of evidence-based practices: Practitioner training, clinical setting, and attitudes toward research. Administration and Policy in Mental Health and Mental Health Services Research. 2007, 34 (4): 319-330. 10.1007/s10488-006-0111-x.

Ofi B, Sowunmi L, Edet D, Anarado N, Ofi B, Sowunmi L, Edet D, Anarado N: Professional nurses' opinion on research and research utilization for promoting quality nursing care in selected teaching hospitals in Nigeria. International Journal of Nursing Practice. 2008, 14 (3): 243-255. 10.1111/j.1440-172X.2008.00684.x.

Oliveri RS, Gluud C, Wille-Jorgensen PA: Hospital doctors' self-rated skills in and use of evidence-based medicine - a questionnaire survey. Journal of Evaluation in Clinical Practice. 2004, 10 (2): 219-10.1111/j.1365-2753.2003.00477.x.

Sweetland J, Craik C: The use of evidence-based practice by occupational therapists who treat adult stroke patients. British Journal of Occupational Therapy. 2001, 64 (5): 256-260.

Veeramah V: A study to identify the attitudes and needs of qualified staff concerning the use of research findings in clinical practice within mental health care settings. Journal of Advanced Nursing. 1995, 22 (5): 855-861.

Wood CK: Adoption of innovations in a medical community: The case of evidence-based medicine. Thesis. 1996, University of Hawaii

Wright A, Brown P, Sloman R: Nurses' perceptions of the value of nursing research for practice. Australian Journal of Advanced Nursing. 1996, 13 (4): 15-18.

Download references

Acknowledgements

This project was made possible by the support of the Canadian Institutes of Health Research (CIHR) Knowledge Translation Synthesis Program (KRS 86255). JES is supported by CIHR postdoctoral and Bisby fellowships. CAE holds a CIHR Canada Research Chair in Knowledge Translation. HMO holds Alberta Heritage Foundation for Medical Research (AHFMR) and KT Canada (CIHR) doctoral scholarships. PG holds a grant from AFA Insurance, and LW is supported by the Center for Care Sciences at Karolinska Institutet. We would like to thank Dagmara Chojecki, MLIS for her support in finalizing the search strategy.

Author information

Authors and affiliations

Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada

Janet E Squires

Faculty of Nursing, University of Alberta, Edmonton, Canada

Carole A Estabrooks, Hannah M O'Rourke & Christine V Newburn-Cook

Department of Clinical Neuroscience (Division of Psychology), Karolinska Institutet, Stockholm, Sweden

Petter Gustavsson

Department of Neurobiology, Care Sciences and Society (Division of Nursing), Karolinska Institutet, Stockholm, Sweden

Lars Wallin


Corresponding author

Correspondence to Janet E Squires .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JES, CAE, PG, and LW participated in designing the study and securing funding for the project. JES, CAE, HMO, PG, and LW participated in developing the search strategy, study relevance, and data extraction tools. JES and HMO undertook the article selection and data extraction. All authors participated in data synthesis. JES drafted the manuscript. All authors provided critical commentary on the manuscript and approved the final version.

Electronic supplementary material

13012_2010_399_MOESM1_ESM.PDF

Additional File 1: Search Strategy. This file contains the details of the search strategy used for the review. (PDF 74 KB)

13012_2010_399_MOESM2_ESM.PDF

Additional File 2: Exclusion List by Reason (N = 393). This file contains a list of the retrieved articles that were excluded from the review and the reason each article was excluded. (PDF 412 KB)

13012_2010_399_MOESM3_ESM.PDF

Additional File 3: The Standards. This file contains an overview of the Standards for Educational and Psychological Testing validity framework and sample predictions used to assess 'relations to other variables' validity evidence according to this framework. (PDF 192 KB)

13012_2010_399_MOESM4_ESM.PDF

Additional File 4: Description of Other Specific Practices Indices and Other General Research Use Indices. This file contains a description of the four measures included in the class 'Other Specific Practices Indices' and the ten measures included in the class 'Other General Research Use Indices'. (PDF 112 KB)

13012_2010_399_MOESM5_ESM.PDF

Additional File 5: Reported Reliability of Self-Report Research Utilization Measures. This file contains the reliability coefficients reported in the included studies. (PDF 85 KB)

13012_2010_399_MOESM6_ESM.PDF

Additional File 6: Supporting Validity Evidence by Self-Report Research Utilization Measure. This file contains the detailed validity evidence on each included self-report research utilization measure. (PDF 355 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Squires, J.E., Estabrooks, C.A., O'Rourke, H.M. et al. A systematic review of the psychometric properties of self-report research utilization measures used in healthcare. Implementation Science 6, 83 (2011). https://doi.org/10.1186/1748-5908-6-83


Received: 20 August 2010

Accepted: 27 July 2011

Published: 27 July 2011

DOI: https://doi.org/10.1186/1748-5908-6-83


  • Item Response Theory
  • Research Utilization
  • Validity Evidence
  • Guideline Adherence
  • Knowledge Utilization



Cognitive psychology and self-reports: Models and methods

  • Published: May 2003
  • Volume 12, pages 219–227 (2003)


Jared B. Jobe


This article describes the models and methods that cognitive psychologists and survey researchers use to evaluate and experimentally test cognitive issues in questionnaire design and subsequently improve self-report instruments. These models and methods assess the cognitive processes underlying how respondents comprehend and generate answers to self-report questions. Cognitive processing models are briefly described. Non-experimental methods – expert cognitive review, cognitive task analysis, focus groups, and cognitive interviews – are described. Examples are provided of how these methods were effectively used to identify cognitive self-report issues. Experimental methods – cognitive laboratory experiments, field tests, and experiments embedded in field surveys – are described. Examples are provided of: (a) how laboratory experiments were designed to test the capability and accuracy of respondents in performing the cognitive tasks required to answer self-report questions, (b) how a field experiment was conducted in which a cognitively designed questionnaire was effectively tested against the original questionnaire, and (c) how a cognitive experiment embedded in a field survey was conducted to test cognitive predictions.





Author information

Authors and affiliations

National Heart, Lung, and Blood Institute, Bethesda, MD, USA

Jared B. Jobe



About this article

Jobe, J.B. Cognitive psychology and self-reports: Models and methods. Qual Life Res 12, 219–227 (2003). https://doi.org/10.1023/A:1023279029852


Issue Date: May 2003

DOI: https://doi.org/10.1023/A:1023279029852


  • Autobiographical memory
  • Cognitive interviews
  • Focus groups
  • Information processing models



Study Notes

Self-Report Techniques

Last updated 22 Mar 2021


Self-report techniques describe methods of gathering data where participants provide information about themselves without interference from the experimenter.

Such techniques include questionnaires, interviews, and even diaries; all ultimately require participants to respond to pre-set questions.

Evaluation of self-report methods

Strengths:

- Participants can be asked about their feelings and cognitions (i.e., thoughts), which can be more useful than simply observing behaviour alone.

- Scenarios can be asked about hypothetically without having to physically set them up and observe participants’ behaviour.

Weaknesses:

- Gathering information about thoughts or feelings is only useful if participants are willing to disclose them to the experimenter.

- Participants may try to give the 'correct' responses they think researchers are looking for (or deliberately do the opposite), or try to come across in the most socially acceptable way (i.e., social desirability bias), which can lead to untruthful responses.




ORIGINAL RESEARCH article

Voices in methodology: analyzing self-mention markers in English and Persian psychology research articles (provisionally accepted)

  • 1 Allameh Tabataba'i University, Iran

The final, formatted version of the article will be published soon.

Although preconceived notions have traditionally discouraged authors from asserting their presence in research articles (RAs), recent studies have substantiated that the use of self-mention markers offers a means to establish authorial identity and recognition in a given discipline. Few studies, however, have explored specific sections of research articles to uncover how self-mentions function within each section's conventions. Exploring the use of self-mention markers, the present study compared the method sections of RAs written by native English writers and L1 Persian writers in the field of psychology. The corpus contained 120 RAs, with each sub-corpus including 60 RAs. The RAs were examined both structurally and functionally, and the data were analyzed quantitatively, using frequency counts and chi-square analyses, and qualitatively, through content analysis. The findings indicated a significant difference between English and Persian authors in the frequency of self-mentions and in the dimension of rhetorical functions; the differences in the dimensions of grammatical forms and of hedging and boosting, however, were insignificant. Native English authors were inclined to make more use of self-mentions in their research articles. The findings can assist novice EAP and ESP researchers in taking cognizance of the conventions of authorial identity in each genre.
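The quantitative step described here, comparing self-mention frequencies across the two sub-corpora with a chi-square test, can be illustrated with a minimal Python sketch. The counts below are invented for illustration only; they are not the study's data.

```python
# Illustrative only: invented frequency counts, not the study's data.
from scipy.stats import chi2_contingency

# Rows: English L1 and Persian L1 sub-corpora; columns: self-mention
# tokens vs. all other tokens in the method sections.
table = [[420, 59580],
         [250, 59750]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```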

Keywords: academic writing, authorial identity, disciplinary differences, method section, self-mention markers

Received: 13 Nov 2023; Accepted: 03 Apr 2024.

Copyright: © 2024 Moradi and Montazeri. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mx. Fatemeh Moradi, Allameh Tabataba'i University, Tehran, Iran



Measuring bias in self-reported data

Robert Rosenman

School of Economic Sciences, Washington State University, P.O. Box 646210, Pullman, WA 99164-6210, USA

Vidhura Tennekoon

School of Economic Sciences, Washington State University, P.O. Box 646210, Pullman, WA 99164-6210, USA. vidhura@wsu.edu

Laura G. Hill

Department of Human Development, Washington State University, 523 Johnson Tower, Pullman WA 99164, USA. laurahill@wsu.edu

Response bias shows up in many fields of behavioural and healthcare research where self-reported data are used. We demonstrate how to use stochastic frontier estimation (SFE) to identify response bias and its covariates. In our application to a family intervention, we examine the effects of participant demographics on response bias before and after participation; gender and race/ethnicity are related to magnitude of bias and to changes in bias across time, and bias is lower at post-test than at pre-test. We discuss how SFE may be used to address the problem of ‘response shift bias’ – that is, a shift in metric from before to after an intervention which is caused by the intervention itself and may lead to underestimates of programme effects.

1 Introduction

In this paper, we demonstrate the potential of a common econometric tool, stochastic frontier estimation (SFE), to measure response bias and its covariates in self-reported data. We illustrate the approach using self-reported measures of parenting behaviours before and after a family intervention. We demonstrate that in addition to affecting targeted behaviours, an intervention may also affect any bias associated with self-assessment of those behaviours. We show that SFE can be used to identify and correct for bias in self-assessment both before and after treatment, resulting in more accurate estimates of treatment effects.

Response bias is a widely discussed phenomenon in behavioural and healthcare research where self-reported data are used; it occurs when individuals offer self-assessed measures of some phenomenon. There are many reasons individuals might offer biased estimates of self-assessed behaviour, ranging from a misunderstanding of what a proper measurement is to social-desirability bias, where the respondent wants to ‘look good’ in the survey, even if the survey is anonymous. Response bias itself can be problematic in programme evaluation and research, but is especially troublesome when it causes a recalibration of bias after an intervention. Recalibration of standards can cause a particular type of measurement bias known as ‘response-shift bias’ ( Howard, 1980 ). Response-shift bias occurs when a respondent's frame of reference changes across measurement points, especially if the changed frame of reference is a function of treatment or intervention, thus, confounding the treatment effect with bias recalibration. More specifically, an intervention may change respondents’ understanding or awareness of the target concept and the estimation of their level of functioning with respect to the concept ( Sprangers and Hoogstraten, 1989 ), thus changing the bias at each measurement point. In fact, some treatments or interventions are intended to change how respondents look at the target concept. Further complicating matters is that an intervention may affect not only a respondent's metric for targeted behaviours across time points (resulting in response shift bias) but may also affect other types of response bias. For example, social desirability bias may decrease over the course of an intervention as respondents come to know and trust a service provider. Thus, it is necessary to understand the degree and type of response bias at both pretest and posttest in order to determine whether response shift has occurred.

When there is a potential for confusing bias recalibration with treatment outcomes, statistical approaches may be useful ( Schwartz and Sprangers, 1999 ). In recent years, researchers have applied structural equation modelling (SEM) to the problem of decomposing error in order to identify response shift bias ( Oort, 2005 ; Oort et al., 2005 ). In this paper, we suggest a different statistical approach which reveals response bias at a single time point as well as differences in bias across time points. Perhaps more importantly, it identifies covariates of these differences. When applied before and after an intervention, it reveals differences related to changes in respondents’ frame of reference. Thus, it can be used to decompose errors so that recalibration of the bias occurring across time points can be distinguished from simple response bias within each time point. The suggested approach is based on SFE ( Aigner et al., 1977 ; Battese and Coelli, 1995 ; Meeusen and van den Broeck, 1977 ), a technique widely used in economics and operational research.

Our approach has two significant advantages over that proposed by Oort et al. (2005) . Their approach reveals only aggregate changes in the responses and requires a minimum of two temporal sets of observations on the self-rating of interest as well as multiple measures of the item to be rated. SFE, to its credit, can identify response differences across individuals (as opposed to simply aggregate response shifts) with a single temporal observation and a single measure, so is much less data intensive. Moreover, since it identifies differences at the individual level, it allows the analyst to identify not only that responses differ by individual, but what characteristics are at the root of the differences. Thus, as long as more than one temporal observation is available for respondents, SFE can be used to systematically identify different types of response recalibration by looking at the changes at the individual level, and aggregating them. SFE again has an advantage because the causes of both bias and recalibration can be identified at the individual level.

What may superficially be seen as two disadvantages of SFE when compared to SEM approaches are actually common to both methods. First, both measure response (and therefore response shift) against a common subjective metric established by the norm of the data. In fact, any systematic difference by an individual from this norm is how we measure 'response bias'. With both SEM and SFE, if an objective metric exists, the difference between the self-rating and the objective measure is easily established. A second apparent disadvantage is that SFE requires a specific assumption of a truncated distribution of the bias (although it is possible to test this assumption statistically). While SEM can reveal response shift on individual bias without such a strong assumption, aggregate changes become manifest only if 'many respondents experience the same shift in the same direction' (Oort, 2005, p.595). Hence, operationally the assumptions are nearly equivalent.

In the next section, we explain how we model response bias and response recalibration within the SFE framework. In Section 3, we present our empirical application, including the results of our baseline model and of a model with heteroscedastic errors as a robustness check. In Section 4, we discuss the relative merits of the proposed method, together with its limitations, and offer some conclusions.

2 Response bias and SFE

We are concerned with situations where individuals do not have an objective measure of some variable of interest, which we denote $Y^*_{it}$, and we have to use a subjective measure (denoted $Y_{it}$) as a proxy instead. An unbiased estimate of the variable of interest $Y^*_{it}$ satisfies

$$Y_{it} \mid Y^*_{it}, Z_{it} = Y_{it} \mid Y^*_{it} \quad (1)$$

where $Y_{it}$ denotes the observed measurement, $Y^*_{it}$ is the true attribute being measured and $Z_{it}$ represents variables other than $Y^*_{it}$. When $Y_{it}$ is self-reported, $Z_{it}$ includes (often unobserved) variables affecting the frame of reference used by respondents for measuring $Y^*_{it}$, and (1) is not assured. Within this context, response bias is simply the case that $Y_{it} \mid Y^*_{it}, Z_{it} \neq Y_{it} \mid Y^*_{it}$. The bias is upward if $Y_{it} \mid Y^*_{it}, Z_{it} > Y_{it} \mid Y^*_{it}$ and downward if the inequality goes the other way.

Our approach for measuring response bias and bias recalibration (change in response bias between two time periods) is based on the Battese and Coelli (1995) adaptation of the stochastic frontier model (SFE) independently proposed by Aigner et al. (1977) and Meeusen and van den Broeck (1977). Let

$$Y^*_{it} = \beta_0 T + X_{it}\beta_t + \varepsilon_{it} \quad (2)$$

where $Y^*_{it}$ is the true (latent) outcome, $T$ denotes some treatment or intervention, [1] $X_{it}$ are variables other than the treatment that explain the outcome, and $\varepsilon_{it}$ is a random error term. For identification, we assume that $\varepsilon_{it}$ is distributed iid $N(0, \sigma^2_\varepsilon)$. The observed self-reported outcome is a combination of the true outcome and the response bias $Y^R_{it}$:

$$Y_{it} = Y^*_{it} - Y^R_{it} \quad (3)$$

We consider the specific case that the bias term $Y^R_{it}$ has a truncated-normal distribution,

$$Y^R_{it} = u_{it} \geq 0 \quad (4)$$

where $u_{it}$ is a random variable which accounts for response shift away from a subjective norm response level (usually called the 'frontier' in SFE) and is distributed $N(\mu_{it}, \sigma^2_u)$, truncated below at zero and independent of $\varepsilon_{it}$. Moreover,

$$\mu_{it} = \delta_0 T + z_{it}\delta \quad (5)$$

where the vector $z_{it}$ includes variables (other than the treatment) that explain the specific deviation from the response frontier. Subscript $i$ indexes the individual observation and subscript $t$ denotes time. [2] Substituting (2), (4) and (5) in (3), we can write

$$E(Y_{it}) = \beta_0 T + X_{it}\beta_t - \delta_0 T - z_{it}\delta - \sigma_u\,\frac{\phi\big((\delta_0 T + z_{it}\delta)/\sigma_u\big)}{\Phi\big((\delta_0 T + z_{it}\delta)/\sigma_u\big)} \quad (6)$$

where $\phi(\cdot)$ and $\Phi(\cdot)$ are the standard normal probability density and cumulative distribution functions, respectively. Any treatment effect is given by $\beta_0$ in equation (6). The normal relationship between the $X$s and $Y$ is given by $\beta_t$. The last three terms on the right-hand side represent the observation-specific response bias from this normal relationship. Treatment can affect both the maximum possible value of the measured outcome of a given individual (as defined by $X_{it}\beta_t$) and the response bias. If treatment changes the response bias it will be indicated by the term $\delta_0$, and the bias recalibration is given by

$$E\big(Y^R_{it} \mid T=1\big) - E\big(Y^R_{it} \mid T=0\big) = \delta_0 + \sigma_u\left[\frac{\phi\big((\delta_0 + z_{it}\delta)/\sigma_u\big)}{\Phi\big((\delta_0 + z_{it}\delta)/\sigma_u\big)} - \frac{\phi\big(z_{it}\delta/\sigma_u\big)}{\Phi\big(z_{it}\delta/\sigma_u\big)}\right] \quad (7)$$

The estimated $\delta_0$ coefficient on treatment indicates how treatment has changed response bias. If $\delta_0 = 0$ there is no recalibration, and the response bias, if it exists, is not affected by the treatment. Cross terms of treatment and other variables (that is, slope dummy variables) may be used if the treatment is thought to change the general way these other variables interact with functioning.

Recalibration can occur independently of the treatment effect. In fact, recalibration is sometimes a goal of the treatment or intervention in addition to the targeted outcome, which means a desired outcome is that $\delta_0 \neq 0$ and $Y_{i1} \mid Y^*_{i1} \neq Y_{i2} \mid Y^*_{i2}$ for $t \in \{1, 2\}$. In other words, there is a change in the individual measurement scale caused (and intended) by the intervention.
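To make the model concrete, here is a minimal simulation sketch of equations (2) through (5) as reconstructed above. All parameter values (beta0, delta0, and the rest) are invented for illustration, not estimates from the paper. It shows how a treatment that lowers the mean of the one-sided bias term contaminates a naive comparison of observed scores.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(42)
n = 2000

# Invented illustration parameters (not estimates from the paper)
beta0, beta_x = 0.30, 0.50     # treatment effect and covariate slope, eq. (2)
delta0, delta_z = -0.25, 0.40  # bias-mean shifters, eq. (5)
sigma_u, sigma_e = 0.50, 0.20  # scales of the bias and noise terms

T = rng.integers(0, 2, n)      # treatment indicator
X = rng.normal(3.5, 0.5, n)    # covariate explaining the outcome
z = rng.normal(1.0, 0.3, n)    # covariate explaining the bias

# Latent outcome, eq. (2)
y_star = beta0 * T + beta_x * X + rng.normal(0.0, sigma_e, n)

# One-sided bias, eqs. (4)-(5): u ~ N(mu, sigma_u^2) truncated below at 0
mu = delta0 * T + delta_z * z
a = (0.0 - mu) / sigma_u       # standardised lower truncation point
u = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma_u, size=n, random_state=rng)

# Observed self-report, eq. (3): a downward-biased version of the latent outcome
y = y_star - u

print("mean bias, untreated:", round(u[T == 0].mean(), 3))
print("mean bias, treated:  ", round(u[T == 1].mean(), 3))
# A naive contrast of observed scores mixes the true effect with recalibration:
print("naive mean difference:", round(y[T == 1].mean() - y[T == 0].mean(), 3))
```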

3 An application to evaluation of a family intervention

We applied SFE to examine response bias and recalibration in programme evaluations of a popular, evidence-based family intervention (the Strengthening Families Program for Parents and Youth 10–14, or SFP) ( Kumpfer et al., 1996 ). Families attend SFP once a week for seven weeks and engage in activities designed to improve family communication, decrease harsh parenting practices, and increase parents’ family management skills. At the beginning and end of a programme, parents report their level of agreement with various statements related to skills and behaviours targeted by the intervention (e.g., ‘I have clear and specific rules about my child's association with peers who use alcohol’). Consistent with the literature on response shift, we hypothesised that non-random bias would be greater at pretest than at posttest as parents changed their standards about intervention-targeted behaviours and became more conservative in their self-ratings. In other words, we expected that after the intervention parents would recalibrate their self-ratings downward, resulting in an underestimate of the programme's effects.

3.1 Participants

Our data consisted of 1437 parents who attended 94 SFP cycles in Washington State and Oregon from 2005 through 2009. 25% of the participants identified themselves as male, 72% as female, and 3% did not report gender. 27% of the participants identified themselves as Hispanic/Latino, 60% as White, 2% as Black, 4% as American Indian/Alaska Native, 3% as other or multiple race/ethnicity, and 3% did not report race/ethnicity. Almost 74% of the households included a partner or spouse of the attending parent, and 19% reported not having a spouse or partner. For almost 8% of the sample, the presence of a partner or spouse is unknown. Over 62% of our observations are from Washington State, with the remainder from Oregon.

3.2 Measures

The outcome measure consisted of 13 items assessing parenting behaviours targeted by the intervention, including communication about substance use, general communication, involvement of children in family activities and decisions, and family conflict. Items were designed by researchers of the programme's efficacy trial, and information about the scale has been reported elsewhere (Spoth et al., 1995; Spoth et al., 1998). Cronbach's alpha (a measure of internal consistency) in the current data was .85 at both pretest and posttest. Items were scored on a 5-point Likert-type scale ranging from 1 ('strongly disagree') to 5 ('strongly agree').
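For readers who want the mechanics behind the reported reliability statistic, a minimal sketch of Cronbach's alpha follows. The response matrix is invented and the function is a generic implementation, not the authors' code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents and columns = scale items."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Invented example: six respondents answering a four-item, 5-point Likert scale.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```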

Variables used in the analysis, including definitions and summary statistics, are presented in Table 1. The average family functioning score, as measured by self-assessed parenting behaviours, increased from 3.98 at the pretest to 4.27 at the posttest, after participation in SFP.

Table 1: Variable names, descriptions and summary statistics

3.3 Procedure

Pencil-and-paper pretests were administered as part of a standard, ongoing programme evaluation on the first night of the programme, before programme content was delivered; posttests were administered on the last night of the programme. All data are anonymous; names of programme participants are not linked to programme evaluations and are unknown to researchers. The Institutional Review Board of Washington State University issued a Certificate of Exemption for the procedures of the current study.

We used SFE to estimate (pre- and post-treatment) family functioning scores as a function primarily of demographic characteristics. Based on previous literature (Howard and Dailey, 1979), we hypothesised that the one-sided errors (response bias) would be downward, and preliminary analysis supported that hypothesis. [3] Additional preliminary analysis of which variables to include among $z_i$ (including a model using all the explanatory variables) led us to conclude that three variables determined the level of bias in the family functioning assessment: age, Latino/Hispanic ethnicity, and whether the functioning measure was a pretest or posttest assessment. We used the 'xtfrontier' routine in Stata to estimate the parameters of our models. Unlike applications of SFE to technical efficiency estimation, our model does not require log transforming the dependent variable.
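The authors estimated the model with Stata's 'xtfrontier'. As a rough illustration of what such a routine maximises, the sketch below writes out the log-likelihood of a cross-sectional normal/truncated-normal frontier model (the Stevenson-type density implied by equations (2) through (5)), with the bias mean linear in the $z$ variables. This is a hedged, assumption-laden sketch, not a reproduction of xtfrontier's panel estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, y, X, Z):
    """Negative log-likelihood of a normal/truncated-normal frontier:
    y = X @ beta - u + eps, u ~ N+(Z @ delta, sigma_u^2), eps ~ N(0, sigma_e^2).
    Per the paper's model, X and Z would both include the treatment dummy."""
    kx, kz = X.shape[1], Z.shape[1]
    beta, delta = theta[:kx], theta[kx:kx + kz]
    sigma_u, sigma_e = np.exp(theta[-2]), np.exp(theta[-1])  # enforce positivity
    sigma = np.hypot(sigma_u, sigma_e)   # sqrt(sigma_u^2 + sigma_e^2)
    lam = sigma_u / sigma_e
    mu = Z @ delta
    e = y - X @ beta                     # composed residual, e = eps - u
    ll = (-np.log(sigma)
          + norm.logpdf((e + mu) / sigma)
          + norm.logcdf(mu / (sigma * lam) - e * lam / sigma)
          - norm.logcdf(mu / sigma_u))
    return -ll.sum()

# Usage sketch (y, X, Z as NumPy arrays, X and Z including constant columns):
# theta0 = np.zeros(X.shape[1] + Z.shape[1] + 2)
# fit = minimize(neg_loglik, theta0, args=(y, X, Z), method="BFGS")
```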

3.4 The baseline model

The results of the baseline SFE model are shown in Table 2. The Wald χ² statistic indicated that the regression was highly significant. Several demographic variables were found to influence the assessment of family functioning with conventional statistical significance. Males gave lower estimates of family functioning than did females and those with unreported gender. All non-White ethnic groups (and those with unreported race/ethnicity) assessed their family's functioning more highly than did White respondents. Participation in the Strengthening Families Program increased individuals' assessments of their family's functioning.

Table 2: SFE total effects model

We assessed bias, and its change, from the coefficient estimates for the $\delta$ parameters, where $\mu_i = z_i\delta$. Our first overall question was whether, in fact, there was a one-sided error. Three measures of unexplained variation are shown in Table 2: $\sigma^2 = E(\varepsilon_i - u_i)^2$ is the variance of the total error, which can be broken down into the component parts $\sigma^2_u = E(u_i^2)$ and $\sigma^2_\varepsilon = E(\varepsilon_i^2)$. The statistic $\gamma = \sigma^2_u / (\sigma^2_u + \sigma^2_\varepsilon)$ gives the percent of total unexplained variation attributable to the one-sided error. To ensure $0 \leq \gamma \leq 1$, the model was parameterised as the inverse logit of $\gamma$, reported as inlgtgamma. Similarly, the model estimated the natural log of $\sigma^2$, reported as lnsigma2, and used these estimates to derive $\sigma^2$, $\sigma^2_\varepsilon$, $\sigma^2_u$ and $\gamma$. As seen in the table, the estimate for inlgtgamma was highly significant but the estimate for lnsigma2 had a p-value of 0.317, which means we cannot reject the hypothesis that all of the variation in the responses is due to respondent-specific bias. Hence, we found strong support for the one-sided variation that we call bias, and we saw that by far the most substantial portion of the unexplained variation in our data came from that source.
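Because the software reports inlgtgamma and lnsigma2 rather than the variances themselves, recovering $\sigma^2_u$, $\sigma^2_\varepsilon$ and $\gamma$ is a matter of inverting the two transforms. A small sketch with invented estimate values (the paper's actual estimates are in Table 2):

```python
import numpy as np

# Invented example values, not the paper's estimates.
inlgtgamma = 2.0   # inverse-logit parameterisation of gamma
lnsigma2 = -0.5    # natural log of the total error variance sigma^2

gamma = 1.0 / (1.0 + np.exp(-inlgtgamma))   # share of variation from bias
sigma2 = np.exp(lnsigma2)                   # total unexplained variance
sigma_u2 = gamma * sigma2                   # one-sided (bias) component
sigma_e2 = (1.0 - gamma) * sigma2           # symmetric noise component
print(f"gamma={gamma:.3f}, sigma_u^2={sigma_u2:.3f}, sigma_e^2={sigma_e2:.3f}")
```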

Three variables explained the level of bias. Latino/Hispanic respondents on average had more biased estimates of their family functioning. Looking again at equation (3) , we see that this means they, relative to other ethnic groups, underestimated their family functioning. However, we found that older participants had smaller biases, thus giving closer estimates of their family's relative functioning. Of primary interest is the estimate of the treatment effect. Participation in SFP strongly lowered the bias, on average.

3.5 Decomposing the measured change in functioning

The total change in the functioning score averaged 0.295. This total change consisted of two parts as indicated by the following:

Total change = Measured postscore − Measured prescore
= (Real postvalue − Postvalue bias) − (Real prevalue − Prevalue bias)
= Real change − (Postvalue bias − Prevalue bias)

The term in parentheses is negative (the estimation indicates that treatment lowered the bias). Thus, the total change in the family functioning score overestimated the improvement due to SFP, although actual post-treatment family functioning was still higher than the reported family functioning scores suggest, on average. Table 3 shows the average estimated bias by pre- and post-treatment, and the average change in bias, which was –0.133. Thus, the average improvement in family functioning was overestimated by this amount.
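Plugging the reported averages into the identity above gives the decomposition directly (a worked check using the figures in the text):

```python
measured_change = 0.295   # average posttest minus pretest functioning score
bias_change = -0.133      # average postvalue bias minus prevalue bias (Table 3)

# From the identity above: measured change = real change - bias change,
# so the real change is the measured change plus the bias change.
real_change = measured_change + bias_change
print(real_change)        # 0.162: reported scores overstate the gain by 0.133
```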

Table 3: Averages of bias and change

Table 4 shows the results of a regression of bias change on demographic and other characteristics. Males and Black respondents had marginally larger bias changes, while those with race/ethnicity unreported had smaller bias changes. Since the bias change was measured as postscore bias minus prescore bias, this means that the bias changed less, on average, for male and Black respondents, but more, on average, for those whose race was unreported.

Table 4: Regression of bias change

3.6 The SFE model with heteroscedastic error

One alternative to our baseline model (known as the total effects model in SFE terminology), which generated the results in Table 2, is a SFE model that allows for heteroscedasticity in $\varepsilon_i$, $u_i$, or both. More precisely, for this model we maintained equation (3) but had $E(\varepsilon_i^2) = \omega_\varepsilon w_i$ and $E(u_i) = \omega_u w_i$, where $\omega_\varepsilon$ and $\omega_u$ are parameters to be estimated and $w_i$ are variables that explain the heteroscedasticity. We note that $w_i$ need not be the same in the two expressions, but since elements of $\omega_\varepsilon$ and $\omega_u$ can be zero we lose no generality by showing it as we do; in fact, in our application we used the same variables in both expressions, those that we used to explain $\mu$ in the first model. Table 5 reports the results of such a model. In this case, the one-sided error we ascribe to bias is evident from statistically significant parameters in the explanatory expressions for $\sigma^2_u$.
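A sketch of how the earlier likelihood could be extended to this heteroscedastic case follows. Note one assumption: the paper states the error moments as linear in $w_i$, while the sketch uses the common log-linear convention $\exp(w_i\omega)$ to keep the scales positive; it is an illustration, not the authors' specification.

```python
import numpy as np
from scipy.stats import norm

def neg_loglik_het(theta, y, X, Z, W):
    """Heteroscedastic variant of the frontier likelihood sketched earlier:
    the scales sigma_u and sigma_e now vary by observation with W."""
    kx, kz, kw = X.shape[1], Z.shape[1], W.shape[1]
    beta = theta[:kx]
    delta = theta[kx:kx + kz]
    omega_u = theta[kx + kz:kx + kz + kw]
    omega_e = theta[kx + kz + kw:]
    # Log-linear scales (an assumed convention; the paper writes the
    # heteroscedasticity as linear in w_i):
    sigma_u = np.exp(W @ omega_u)
    sigma_e = np.exp(W @ omega_e)
    sigma = np.hypot(sigma_u, sigma_e)
    lam = sigma_u / sigma_e
    mu = Z @ delta
    e = y - X @ beta
    ll = (-np.log(sigma)
          + norm.logpdf((e + mu) / sigma)
          + norm.logcdf(mu / (sigma * lam) - e * lam / sigma)
          - norm.logcdf(mu / sigma_u))
    return -ll.sum()
```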

Table 5: SFE with heteroscedasticity

We note first that the estimates in the main body of the equation were quantitatively and qualitatively very similar to those for the non-heteroscedastic SFE model. The only substantive change is that age was no longer significant at an acceptable p-value, and race unreported had a p-value of 0.1. All signs and magnitudes were similar. Once again, results indicated that participation in SFP (treatment) strongly improved functioning. Additionally, treatment lowered the variability of both sources of unexplained variation across participants. The decreased unexplained variation due to $\varepsilon$ is likely explained by individuals having a better idea of the constructs assessed by scale items. For our purposes, the key statistic here is the coefficient of treatment explaining $\sigma^2_u$. The estimated parameter was negative and significant, with a p-value of 0.03. Since the bias was one-sided, we can clearly conclude that going through SFP lowered the variability of the bias significantly. Moreover, these estimates can be used to predict the bias of each observation; with this model the average bias fell from 0.545 to 0.492, so while the biases were larger with this model, the decrease in the average (–0.053) was about one-half the decrease we saw in the first model.

4 Discussion and conclusions

As we noted earlier, bias in self-rating is of concern in a variety of research areas. In particular, the potential for recalibration of self-rating bias as a function of material or skills learned in an intervention has long been a concern to programme evaluators as it may result in underestimates of programme effectiveness ( Howard and Dailey, 1979 ; Norman, 2003 ; Pratt et al., 2000 ; Sprangers, 1989 ). However, in the absence of an objective performance measurement, it has not been possible to determine whether lower posttest scores truly represent response-shift bias or instead an actual decrement in targeted behaviours or knowledge (i.e., an iatrogenic effect of treatment). By allowing evaluators to test for a decrease in response bias from pretest to posttest, SFE provides a means of resolving this conundrum.

The SFE method, however, is not without problems. The main limitation is that the estimates rely on assumptions about the distributions of the two error components. Model identification requires that one of the error terms, the bias term in our application, be one-sided. This, however, is not as strong an assumption as it looks, for two reasons. First, there is often prior information or theory that indicates the most likely direction of the bias. Second, the validity of the assumption can be tested statistically.

We presented SFE as a method to identify response bias, and changes in response bias, within the context of self-reported measurements at individual and aggregate levels. Even though we proposed a novel application, the technique is not new and has been widely used in economics and operational research for over three decades. The procedure is easy for researchers to adopt, since it is already supported by several statistical packages, including Stata (StataCorp., 2009) and LIMDEP (Econometrica Software, Inc., 2009).

Response bias has long been a key issue in psychometrics, with response-shift bias a particular concern in programme evaluation. However, almost all statistical attempts to address the issue have been confined to using SEM to test for response-shift bias at the aggregate level. As noted in the introduction, our approach has three significant advantages over SEM techniques that try to measure response bias. SEM requires more data (multiple time periods and multiple measures) and measures bias only in the aggregate; SFE can identify bias with a single time period (although multiple observations are needed to identify bias recalibration) and identifies response biases across individuals. Perhaps the biggest advantage over SEM approaches is that SFE not only identifies bias but also provides information about its root causes. SFE allows simultaneous analysis of treatment effectiveness, causal factors of outcomes, and covariates of the bias, improving the statistical efficiency of the analysis over traditional SEM, which often cannot identify causal factors and covariates of bias and, when it can, requires two-step procedures. And since SFE allows the researcher to identify bias and causal factors at the individual level, it expands our ability to identify, understand, explain, and potentially correct for response-shift bias. Of course, bias at the individual level can be aggregated to measures comparable to what is learned through SEM approaches.

Acknowledgements

The authors would like to thank the anonymous referees. This study was supported in part by the National Institute on Drug Abuse (grants R21 DA025139-01A1 and R21 DA19758-01). We thank the programme providers and families who participated in the programme evaluation.

Robert Rosenman is a Professor of Economics in the School of Economic Sciences at Washington State University. His current research aims to develop new approaches to measuring the economic benefits of substance abuse prevention programmes. His research has appeared in journals such as the American Economic Review, Health Economics, Clinical Infectious Diseases and Health Care Management Science.

Vidhura Tennekoon is a graduate student in the School of Economic Sciences at Washington State University. His research interests are in health economics, applied econometrics and prevention science, with a current focus on dealing with misclassification in survey data.

Laura G. Hill is a Psychologist and Associate Professor of Human Development at Washington State University. Her research focuses on the translation of evidence-based prevention programmes from research to practice and measurement of programme effectiveness in uncontrolled settings.

Reference to this paper should be made as follows: Rosenman, R., Tennekoon, V. and Hill, L.G. (2011) ‘Measuring bias in self-reported data’, Int. J. Behavioural and Healthcare Research, Vol. 2, No. 4, pp.320–332.

1 We present a single model that allows for pre- and post-intervention measurement of the outcome of interest and bias. If the self-reported data is not related to an intervention, β_0 and δ_0 (below) are identically 0 and there is only one time period, t.

2 Due to symmetry of the normal distribution, without loss of generality we can also assume that the bias distribution is right truncated.

3 When we tried to estimate the parameters of a model with upward one-sided errors, the maximisation procedure failed to converge. A specification with upward one-sided errors but without a constant term converged, but the null hypothesis that there is a one-sided error term was rejected with near certainty, indicating that there is no sizable upward response bias. A similar analysis with the upward one-sided errors completely random (rather than dependent on treatment and other variables) was also rejected, again with near certainty. Thus, upward bias was robustly rejected.

Contributor Information

Robert Rosenman, School of Economic Sciences, Washington State University, P.O. Box 646210, Pullman, WA 99164-6210, USA.

Vidhura Tennekoon, School of Economic Sciences, Washington State University, P.O. Box 646210, Pullman, WA 99164-6210, USA. vidhura@wsu.edu.

Laura G. Hill, Department of Human Development, Washington State University, 523 Johnson Tower, Pullman, WA 99164, USA. laurahill@wsu.edu.

  • Aigner D, Lovell CAK, Schmidt P. Formulation and estimation of stochastic frontier production function models. Journal of Econometrics. 1977;6(1):21–37.
  • Battese GE, Coelli TJ. A model for technical inefficiency effects in a stochastic frontier production function for panel data. Empirical Economics. 1995;20(2):325–332.
  • Econometrica Software, Inc. LIMDEP Version 9.0 [Computer Software]. Plainview, NY: Econometrica Software, Inc.; 2009.
  • Howard GS. Response-shift bias: a problem in evaluating interventions with pre/post self-reports. Evaluation Review. 1980;4(1):93–106. DOI: 10.1177/0193841x8000400105.
  • Howard GS, Dailey PR. Response-shift bias: a source of contamination of self-report measures. Journal of Applied Psychology. 1979;64(2):144–150.
  • Kumpfer KL, Molgaard V, Spoth R. The strengthening families program for the prevention of delinquency and drug use. In: Peters RD, McMahon RJ, editors. Preventing Childhood Disorders, Substance Abuse, and Delinquency. Banff International Behavioral Science Series, Vol. 3. Thousand Oaks, CA: Sage Publications, Inc.; 1996. pp. 241–267.
  • Meeusen W, van den Broeck J. Efficiency estimation from Cobb-Douglas production functions with composed error. International Economic Review. 1977;18(2):435–444.
  • Norman G. Hi! How are you? Response shift, implicit theories and differing epistemologies. Quality of Life Research. 2003;12(3):239–249.
  • Oort FJ. Using structural equation modeling to detect response shifts and true change. Quality of Life Research. 2005;14(3):587–598.
  • Oort FJ, Visser MRM, Sprangers MAG. An application of structural equation modeling to detect response shifts and true change in quality of life data from cancer patients undergoing invasive surgery. Quality of Life Research. 2005;14(3):599–609.
  • Pratt CC, McGuigan WM, Katzev AR. Measuring program outcomes: using retrospective pretest methodology. American Journal of Evaluation. 2000;21(3):341–349.
  • Schwartz CE, Sprangers MAG. Methodological approaches for assessing response shift in longitudinal health-related quality-of-life research. Social Science & Medicine. 1999;48(11):1531–1548.
  • Spoth R, Redmond C, Shin C. Direct and indirect latent-variable parenting outcomes of two universal family-focused preventive interventions: extending a public health-oriented research base. Journal of Consulting and Clinical Psychology. 1998;66(2):385–399. DOI: 10.1037/0022-006x.66.2.385.
  • Spoth R, Redmond C, Haggerty K, Ward T. A controlled parenting skills outcome study examining individual difference and attendance effects. Journal of Marriage and Family. 1995;57(2):449–464. DOI: 10.2307/353698.
  • Sprangers M. Subject bias and the retrospective pretest in retrospect. Bulletin of the Psychonomic Society. 1989;27(1):11–14.
  • Sprangers M, Hoogstraten J. Pretesting effects in retrospective pretest-posttest designs. Journal of Applied Psychology. 1989;74(2):265–272. DOI: 10.1037/0021-9010.74.2.265.
  • StataCorp. Stata Statistical Software: Release 11 [Computer Software]. College Station, TX: StataCorp LP; 2009.


‘Landmark in survey research’: How the COVID States Project analyzed the pandemic with objectivity

Four years ago David Lazer formed the Northeastern-led effort — resulting in more than 100 cutting-edge reports and national media coverage.


David Lazer ran into fellow Northeastern University professor Alessandro Vespignani. It was February 2020, one month before the COVID-19 shutdowns.

“I said, ‘Tell me: How bad is it going to be?’” says Lazer, University Distinguished Professor of Political Science and Computer Sciences at Northeastern. “And he laid out how bad it would be.”

They were facing a life-changing event, warned Vespignani, director of the Network Science Institute and Sternberg Family Distinguished Professor at Northeastern. SARS-CoV-2, the virus that causes COVID-19, was spreading fast throughout the U.S. and beyond just three months after its emergence in Wuhan, China.

“He talked about how things were going to shut down over the following month and how there was going to be an indefinite time of having to modify our lives in order to protect ourselves individually and collectively,” Lazer recalls of that conversation. “He really got the broad parameters spot on.

“I obviously was quite distressed. I was thinking, ‘What can I do to contribute to the moment?’”

The answer would become known as the COVID States Project, a Northeastern-led effort by four universities that would analyze newly collected data to make sense of the evolving and volatile COVID-19 pandemic.

Over the next four years the project would put out more than 100 reports, all addressing urgent issues, which drew media coverage across the country.

Sharing their expertise across a variety of fields — computational social science, network science, public opinion polling, epidemiology, public health, psychiatry, communication and political science — the researchers framed and conducted surveys that enabled them to identify national and regional trends that influenced (and were influenced by) the spread of the virus.

“It was an act of improvisation — we didn’t know exactly what we were going to do,” Lazer says. “But we felt quite committed to having a positive impact and using our tools, our skill set, to do something during this horrible moment.”

Built into their real-time research was an understanding that social behaviors would play a large role in a pandemic that has claimed close to 1.2 million lives nationally, according to the Centers for Disease Control and Prevention (though there is reason to believe many more people have died).


The project’s surveys and reports reflected national moods and trends while also providing reliable information for policymakers at a time when the future was difficult to predict.

“David, being a political scientist, told me that he had this idea that a survey would be helpful,” says Mauricio Santillana, an original member of the COVID States Project who has since joined Northeastern as director of the Machine Intelligence Group for the betterment of Health and the Environment (MIGHTE) at the Network Science Institute. “I told him it was very appropriate because rather than seeing a population reaction to a public health crisis, the pandemic was evolving into a sociological problem — one where people were reacting more from their political views rather than scientific evidence.

“He had this idea of having a project where we could monitor people’s feelings, emotions and their changing behaviors in response to pronounced increases in COVID-19 infections and we could record their political affiliations,” adds Santillana, who was focused on mathematically modeling the pandemic. “The project became a really important tool for me to understand why things were getting worse and worse.”

Their work was grounded in objectivity: respecting all points of view while prioritizing understanding and withholding judgment.

“By shedding light on things in a way that has visibility,” Lazer says, “one hopes that you are informing individual people who are reading about our stories in the media as well as policy elites about what decisions should be made.”

‘The best data out there’

It began with Lazer contacting colleagues at other universities. The COVID States Project became an effort coordinated by Lazer, Matthew Baum and Roy Perlis of Harvard, Katherine Ognyanova of Rutgers and James Druckman of Northwestern. Weekly meetings were held at 10 a.m. on Fridays as the project grew to include undergraduate students and postdoctoral researchers, all contributing on a volunteer basis.

“We went out into the field in April and we started collecting data,” Lazer says. “We realized that we could get useful results for all 50 states. We could see the numbers pile up and that was an exciting moment, like, maybe this thing can actually work.”

Northeastern provided the startup funding, as well as many of the volunteers and much of the person power: authors of project reports included three postdoctoral fellows and six students from Northeastern. Additional financial support came from the National Science Foundation, the National Institutes of Health and other supporters, enabling the project to grow and expand. The project’s work on COVID-19 is continuing even now.

“We’re still putting out data on vaccination rates and infection rates,” says Lazer, whose team relied on a third-party vendor for online surveys that represent a new frontier for public polling. “It turns out that our data are better than the official data, because the official data are seriously flawed in important ways.”

Those official numbers can be faulty because individual states have difficulty linking residents with the number of vaccinations they’ve received, says Lazer.

The COVID States Project team has learned not only how to frame questions with the precision to address relevant issues, but also how to re-weight the answers to provide representative analysis; a stylized sketch of that idea follows.
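As a purely illustrative sketch of what such re-weighting can involve (not the project's actual pipeline, and with hypothetical cell labels and population shares), post-stratification assigns each respondent a weight equal to their demographic cell's population share divided by that cell's share of the sample:

    # Hypothetical post-stratification example; cells and shares are made up.
    import pandas as pd

    def poststratify(sample: pd.DataFrame, pop_shares: dict, cell_col: str) -> pd.Series:
        sample_shares = sample[cell_col].value_counts(normalize=True)
        return sample[cell_col].map(lambda c: pop_shares[c] / sample_shares[c])

    # sample["weight"] = poststratify(
    #     sample, {"18-29": 0.21, "30-44": 0.25, "45-64": 0.33, "65+": 0.21}, "age_group")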

“If you want to know the vaccination rates of a given state, I think our data are the best data out there,” Lazer says. “It’s pretty mind-blowing that we have done 1,400 to 1,500 state-level surveys.”

Initial efforts were focused on understanding the basics of the pandemic. While all 50 states were developing plans to reopen for business in June 2020, the project found that most people preferred a more cautious approach, with only 15% of respondents favoring an immediate reopening.

“The project is a landmark in survey research,” says Alexi Quintana Mathé, a fourth-year Ph.D. student working with Lazer at Northeastern. “We surveyed more than 20,000 respondents roughly every month, with viable samples in every U.S. state and good representativity of the general population. This allowed us to closely monitor behaviors, opinions and consequences of the COVID-19 pandemic across the country with a special focus on differences by state, which were particularly relevant during the pandemic.”

Their work was able to show that Black people waited longer for test results than other people in the U.S.

“It’s important to illuminate and create accountability,” Lazer says.

The project’s tracking of social-distancing behaviors in October 2020 helped predict which states would experience surges the following month.

A survey in summer 2020 accurately predicted the rates of people who would submit to vaccinations when the shots became available that December. Another survey was able to show which demographic groups would be reluctant to be vaccinated.

“The team found that concerns over vaccine safety, as well as distrust, were key reasons [for reluctance],” says Kristin Lunz Trujillo, now a University of South Carolina assistant professor of political science who worked on the COVID States Project as a Northeastern postdoctoral fellow. “This report sparked a lot of other ongoing work on the project and gave a fuller picture of COVID vaccine hesitancy than what our typical survey measures provided.”

“People still needed to be convinced, and I think that was a very natural response,” says Santillana, a Northeastern professor of physics and electrical and computer engineering. “The fact that people were concerned about their health when being exposed to a vaccine is a natural thing. But that was being interpreted as, ‘Oh, then you are a denier.’ There was no room to be a normal person who wants to learn as we experience things. For me, being a mathematician and physicist and hearing my political-scientist colleagues discussing issues of trust in medical research and medical professionals, it became a multidisciplinary learning experience.”

A constructive role by academia

In the midst of their COVID-19 work, the researchers delved into other major U.S. events. They were able to identify the demographics of the widespread Black Lives Matter protests that followed the May 2020 murder of George Floyd. And they were able to show that those outdoor protests did not result in upsurges of pandemic-related illness.

“The diverse expertise of scientists on the project meant that we could investigate public health issues both broadly and deeply,” says Alauna Safarpour, a Northeastern postdoctoral contributor to the project who now serves as assistant professor of political science at Gettysburg College. “We not only analyzed misinformation related to the pandemic, vaccine skepticism and depression/mental health concerns, but also abortion attitudes, support for political violence and even racism as a public health concern.”

Anticipating the role that mail-in ballots would play in the 2020 election, the project predicted which state results would change as late-arriving votes were counted.

“We had a piece predicting the shift after Election Day,” Lazer says. “We said there’s going to be a shift towards Biden in some states and it will be a very large shift — and we got the states right, we got the estimates right. 

“We were trying to prepare people that there was nothing fishy going on here. That this is what is expected.”

After the insurrection of Jan. 6, 2021, the project accurately predicted that Donald Trump would retain his influence as leader of the Republican Party.

“There were a lot of people right after Jan. 6 who said Trump is finished,” Lazer says. “We went into the field a couple of days later, did a survey and we said, ‘The [typical] Republican believes the election was stolen and says Trump’s endorsement would still matter a lot.’” 

The Supreme Court’s overturning of Roe v. Wade in June 2022 was followed by a COVID States Project report accurately forecasting a Democratic backlash .

“There’s a story here around the constructive role that academia can play in moments of crisis — the tools that we have are really quite practical,” Lazer says. “As the information ecosystem of our country has diminished — we see the news media firing people left and right — there is a role for universities to take some of that capacity for creating knowledge and translating that to help with the crises of the day.”

Next up: CHIP50

“It uncovered the impact of the social and political changes that Americans went through over the last four years at the national level, but more importantly it broke down the findings to demographic and regional groups,” says Ata Aydin Uslu, a third-year Ph.D. student at the Lazer Lab at Northeastern. “I see CSP as a successful attempt to mic up the American public. We enabled Americans to make their point to the local and federal decision-makers, and the decision-makers to make informed decisions and resource allocations — something that was of utmost importance during a once-in-a-century crisis.”

Entering its fifth year, the project is taking on a new identity to reflect the changing times. The newly named Civic Health and Institutions Project, a 50 States Survey (CHIP50), is building on the lessons learned by the COVID States Project team during the pandemic.

“The idea is to institutionalize the notion of doing 50 state surveys in a federal country,” Lazer says. “We have this perspective on states that no other research ever has.”

Their ongoing work will include competitions to add questions from outside scholars, Lazer says. “We’re still going to issue reports, but less often, and we’re going to be turning more to scholarship while still trying to get that translational element of what does this mean, what people should think, what policymakers should do and so on.”

During a recent interview, as Lazer recounts the work of the past four years via a Zoom call, his head bobs back and forth. When the pandemic forced him to isolate, he explains, he made a habit of working while walking on a treadmill in his attic. At times he responded to the pressures of the pandemic by working 16 hours and logging 40,000 steps a day, developing plantar fasciitis along the way.

“All of this has made me think much more about the underlying sociological and psychological realities of how people process information — and the role that trust in particular plays,” Lazer says. “It has really shaped my thinking about what is core in understanding politics.”

Ian Thomsen is a Northeastern Global News reporter. Email him at [email protected]. Follow him on X/Twitter @IanatNU.



Researchers map how the brain regulates emotions

Study identifies multiple emotion regulation systems, providing targets for therapy.

Ever want to scream during a particularly bad day, but then manage not to? Thank the human brain and how it regulates emotions, which can be critical for navigating everyday life. As we perceive events unfolding around us, the ability to be flexible and reframe a situation impacts not only how we feel, but also our behavior and decision-making.

In fact, some of the problems associated with mental health relate to individuals' inability to be flexible, such as when persistent negative thoughts make it hard to perceive a situation differently.

To help address such issues, a new Dartmouth-led study is among the first of its kind to separate activity relating to emotion generation from emotion regulation in the human brain. The findings are published in Nature Neuroscience .

"As a former biomedical engineer, it was exciting to identify some brain regions that are purely unique to regulating emotions," says lead author Ke Bo, a postdoctoral researcher in the Cognitive and Affective Neuroscience Lab (CANlab) at Dartmouth. "Our results provide new insight into how emotion regulation works by identifying targets which could have clinical applications."

For example, the systems the researchers identified could be good targets for brain stimulation to enhance the regulation of emotion.

Using computational methods, the researchers examined two independent datasets of fMRI studies obtained earlier by co-author Peter Gianaros at the University of Pittsburgh. Participants' brain activity was recorded in an fMRI scanner as they viewed images likely to draw a negative reaction, such as a bloody scene or scary-looking animals.

The participants were then asked to recontextualize the stimulus by generating new kinds of thoughts about an image to make it less aversive, before a neutral image was presented followed by another dislikable image.

By examining the neural activity, researchers could identify the brain areas that are more active when emotions are regulated versus when emotions are generated.

The new study reveals that emotion regulation, also known in neuroscience as "reappraisal," involves particular areas of the anterior prefrontal cortex and other higher-level cortical hierarchies whose role in emotion regulation had not previously been isolated with this level of precision. These regions are involved in other high-level cognitive functions and are important for abstract thought and long-term representations of the future.

The more people are able to activate these emotion regulation-selective brain regions, the more resilient they are to experiencing something negative without letting it affect them personally. These findings build on other research linking these areas to better mental health and the ability to resist temptations and avoid drug addiction.

The results also demonstrated that the amygdala, the brain region responsible for negative emotion that has long been considered an ancient subcortical threat center, responds to aversive experiences the same way whether or not people are using their thoughts to down-regulate negative emotion. "It's really the cortex that is responsible for generating people's emotional responses, by changing the way we see and attach meaning to events in our environments," says Bo.

The researchers were also interested in identifying the neurochemicals that interact with emotion regulation systems. Neurotransmitters like dopamine and serotonin shape how networks of neurons communicate and are targets for both illicit drugs and therapeutic treatments alike. Some neurotransmitters may be important for enabling the ability to self-regulate or "down-regulate."

The team compared the emotion regulation brain maps from the two datasets to neurotransmitter binding maps from 36 other studies. The systems involved in regulating negative emotion overlapped with particular neurotransmitter systems.

"Our results showed that receptors for cannabinoids, opioids, and serotonin, including 5H2A, were especially rich in areas that are involved in emotion regulation," says senior author Tor Wager, the Diana L. Taylor Distinguished Professor in Neuroscience and director of the Dartmouth Brain Imaging Center at Dartmouth. "When drugs that bind to these receptors are taken, they are preferentially affecting the emotion regulation system, which raises questions about their potential for long-term effects on our capacity to self-regulate."

Serotonin is well-known for its role in depression, as the most widely used antidepressant drugs inhibit its reuptake in synapses, which transmit signals from one neuron to another.

5H2A is the serotonin receptor most strongly affected by another exciting new type of treatment for mental health -- psychedelic drugs. The study's findings suggest that the effects of drugs on depression and other mental health disorders may work in part by altering how we think about life events and our ability to self-regulate. This may help explain why drugs, particularly psychedelics, are likely to be ineffective without the right kind of psychological support. The study could help improve therapeutic approaches by increasing our understanding of why and how psychological and pharmaceutical approaches need to be combined into integrated treatments.

"It's important to consider these types of connections that come from basic science," says Wager. "Understanding drug effects requires understanding the brain systems involved and what they're doing at a cognitive level."


Story Source:

Materials provided by Dartmouth College. Original written by Amy Olson. Note: Content may be edited for style and length.

Journal Reference:

  • Ke Bo, Thomas E. Kraynak, Mijin Kwon, Michael Sun, Peter J. Gianaros, Tor D. Wager. A systems identification approach using Bayes factors to deconstruct the brain bases of emotion regulation. Nature Neuroscience, 2024; DOI: 10.1038/s41593-024-01605-7
