

Extended Response

Extended responses can be much longer and more complex than short responses, but students should be encouraged to remain focused and organized. On the FCAT, students have 14 lines for each answer to an extended response item, and they are advised to allow approximately 10-15 minutes to complete each item. The FCAT extended responses are scored using a 4-point rubric: a complete and correct answer earns 4 points, while a partial answer earns 1, 2, or 3 points.




Best Practices for Designing and Grading Exams

Adapted from CRLT Occasional Paper #24: M. E. Piontek (2008), Center for Research on Learning and Teaching.

The most obvious function of assessment methods (such as exams, quizzes, papers, and presentations) is to enable instructors to make judgments about the quality of student learning (i.e., assign grades). However, the method of assessment also can have a direct impact on the quality of student learning. Students assume that the focus of exams and assignments reflects the educational goals most valued by an instructor, and they direct their learning and studying accordingly (McKeachie & Svinicki, 2006). General grading systems can have an impact as well. For example, a strict bell curve (i.e., norm-referenced grading) has the potential to dampen motivation and cooperation in a classroom, while a system that strictly rewards proficiency (i.e., criterion-referenced grading) could be perceived as contributing to grade inflation. Given the importance of assessment for both faculty and student interactions about learning, how can instructors develop exams that provide useful and relevant data about their students' learning and also direct students to spend their time on the important aspects of a course or course unit? How do grading practices further influence this process?

Guidelines for Designing Valid and Reliable Exams

Ideally, effective exams have four characteristics. They are:

  • Valid (providing useful information about the concepts they were designed to test),
  • Reliable (allowing consistent measurement and discriminating between different levels of performance),
  • Recognizable (instruction has prepared students for the assessment), and
  • Realistic (concerning the time and effort required to complete the assignment) (Svinicki, 1999).

Most importantly, exams and assignments should focus on the most important content and behaviors emphasized during the course (or particular section of the course). What are the primary ideas, issues, and skills you hope students learn during a particular course/unit/module? These are the learning outcomes you wish to measure. For example, if your learning outcome involves memorization, then you should assess for memorization or classification; if you hope students will develop problem-solving capacities, your exams should focus on assessing students' application and analysis skills. As a general rule, assessments that focus too heavily on details (e.g., isolated facts, figures, etc.) "will probably lead to better student retention of the footnotes at the cost of the main points" (Halpern & Hakel, 2003, p. 40). As noted in Table 1, each type of exam item may be better suited to measuring some learning outcomes than others, and each has its advantages and disadvantages in terms of ease of design, implementation, and scoring.

Table 1: Advantages and Disadvantages of Commonly Used Types of Achievement Test Items

Adapted from Table 10.1 of Worthen, et al., 1993, p. 261.

General Guidelines for Developing Multiple-Choice and Essay Questions

The following sections highlight general guidelines for developing multiple-choice and essay questions, which are often used in college-level assessment because they readily lend themselves to measuring higher-order thinking skills (e.g., application, justification, inference, analysis, and evaluation). Yet instructors often struggle to create, implement, and score these types of questions (McMillan, 2001; Worthen, et al., 1993).

Multiple-choice questions have a number of advantages. First, they can measure various kinds of knowledge, including students' understanding of terminology, facts, principles, methods, and procedures, as well as their ability to apply, interpret, and justify. When carefully designed, multiple-choice items also can assess higher-order thinking skills.

Multiple-choice questions are less ambiguous than short-answer items, thereby providing a more focused assessment of student knowledge. Multiple-choice items are superior to true-false items in several ways: on true-false items, students can receive credit for knowing that a statement is incorrect, without knowing what is correct. Multiple-choice items offer greater reliability than true-false items as the opportunity for guessing is reduced with the larger number of options. Finally, an instructor can diagnose misunderstanding by analyzing the incorrect options chosen by students.

A disadvantage of multiple-choice items is that they require plausible but incorrect options, which can be difficult to create. In addition, multiple-choice questions do not allow instructors to measure students' ability to organize and present ideas. Finally, because it is much easier to create multiple-choice items that test recall and recognition rather than higher-order thinking, multiple-choice exams run the risk of not assessing the deep learning that many instructors consider important (Gronlund & Linn, 1990; McMillan, 2001).

Guidelines for writing multiple-choice items include advice about stems, correct answers, and distractors (McMillan, 2001, p. 150; Piontek, 2008):

  • Stems pose the problem or question.
  • Is the stem stated as clearly, directly, and simply as possible?
  • Is the problem described fully in the stem?
  • Is the stem stated positively, to avoid the possibility that students will overlook terms like “no,” “not,” or “least”?
  • Does the stem provide only information relevant to the problem?

Possible responses include the correct answer and distractors, or incorrect choices. Multiple-choice questions usually have at least three distractors.

  • Are the distractors plausible to students who do not know the correct answer?
  • Is there only one correct answer?
  • Are all the possible answers parallel with respect to grammatical structure, length, and complexity?
  • Are the options short?
  • Are complex options avoided? Are options placed in logical order?
  • Are correct answers spread equally among all the choices? (For example, is answer “A” correct about the same number of times as options “B” or “C” or “D”)?
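The last check lends itself to a quick script. A minimal sketch (the 20-item answer key below is invented for illustration) tallies how often each letter is the correct answer:

```python
from collections import Counter

def answer_distribution(answer_key):
    """Return the share of items for which each option letter is correct."""
    counts = Counter(answer_key)
    total = len(answer_key)
    return {letter: counts.get(letter, 0) / total for letter in "ABCD"}

# Invented 20-item answer key: option "A" is correct far too often,
# a pattern test-wise students will notice and exploit.
key = list("AABA CADA ABAC AABD ABAA".replace(" ", ""))
dist = answer_distribution(key)
print(dist)  # A: 60%, B: 20%, C: 10%, D: 10%
```

A roughly even spread (about 25% per letter for four options) keeps the answer key itself from leaking information.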

An example of good multiple-choice questions that assess higher-order thinking skills is the following test question from pharmacy (Park, 2008):

Patient WC was admitted for third-degree burns over 75% of his body. The attending physician asks you to start this patient on antibiotic therapy. Which one of the following is the best reason why WC would need antibiotic prophylaxis?

  a. His burn injuries have broken down the innate immunity that prevents microbial invasion.
  b. His injuries have inhibited his cellular immunity.
  c. His injuries have impaired antibody production.
  d. His injuries have induced the bone marrow, thus activating the immune system.

A second question builds on the first by describing the patient’s labs two days later, asking the students to develop an explanation for the subsequent lab results. (See Piontek, 2008 for the full question.)

Essay questions can tap complex thinking by requiring students to organize and integrate information, interpret information, construct arguments, give explanations, evaluate the merit of ideas, and carry out other types of reasoning (Cashin, 1987; Gronlund & Linn, 1990; McMillan, 2001; Thorndike, 1997; Worthen, et al., 1993). Restricted response essay questions are good for assessing basic knowledge and understanding and generally require a brief written response (e.g., "State two hypotheses about why birds migrate. Summarize the evidence supporting each hypothesis" [Worthen, et al., 1993, p. 277]). Extended response essay items allow students to construct a variety of strategies, processes, interpretations, and explanations for a question, such as the following:

The framers of the Constitution strove to create an effective national government that balanced the tension between majority rule and the rights of minorities. What aspects of American politics favor majority rule? What aspects protect the rights of those not in the majority? Drawing upon material from your readings and the lectures, did the framers successfully balance this tension? Why or why not? (Shipan, 2008).

In addition to measuring complex thinking and reasoning, advantages of essays include the potential for motivating better study habits and providing students flexibility in their responses. Instructors can evaluate how well students are able to communicate their reasoning with essay items, and essays are usually less time-consuming to construct than multiple-choice items that measure reasoning.

The major disadvantages of essays include the amount of time instructors must devote to reading and scoring student responses, and the importance of developing and using carefully constructed criteria/rubrics to ensure reliability of scoring. Essays can assess only a limited amount of content in one testing period/exam due to the length of time required for students to respond to each essay item. As a result, essays do not provide a good sampling of content knowledge across a curriculum (Gronlund & Linn, 1990; McMillan, 2001).

Guidelines for writing essay questions include the following (Gronlund & Linn, 1990; McMillan, 2001; Worthen, et al., 1993):

  • Restrict the use of essay questions to educational outcomes that are difficult to measure using other formats. For example, to test recall knowledge, true-false, fill-in-the-blank, or multiple-choice questions are better measures.
  • Define the intended behavior the answer should display, and word the question to elicit it. For example:
      • Generalization: State a set of principles that can explain the following events.
      • Synthesis: Write a well-organized report that shows…
      • Evaluation: Describe the strengths and weaknesses of…
  • Write the question clearly so that students do not feel that they are guessing at “what the instructor wants me to do.”
  • Indicate the amount of time and effort students should spend on each essay item.
  • Avoid giving students options for which essay questions they should answer. This choice decreases the validity and reliability of the test because each student is essentially taking a different exam.
  • Consider using several narrowly focused questions (rather than one broad question) that elicit different aspects of students’ skills and knowledge.
  • Make sure there is enough time to answer the questions.

Guidelines for scoring essay questions include the following (Gronlund & Linn, 1990; McMillan, 2001; Wiggins, 1998; Worthen, et al., 1993; Writing and grading essay questions, 1990):

  • Outline what constitutes an expected answer.
  • Select an appropriate scoring method based on the criteria. A rubric is a scoring key that indicates the criteria for scoring and the number of points to be assigned for each criterion.

For other examples of rubrics, see CRLT Occasional Paper #24  (Piontek, 2008).

  • Clarify the role of writing mechanics and other factors independent of the educational outcomes being measured. For example, how does grammar or use of scientific notation figure into your scoring criteria?
  • Create anonymity for students’ responses while scoring and create a random order in which tests are graded (e.g., shuffle the pile) to increase accuracy of the scoring.
  • Use a systematic process for scoring each essay item.  Assessment guidelines suggest scoring all answers for an individual essay question in one continuous process, rather than scoring all answers to all questions for an individual student. This system makes it easier to remember the criteria for scoring each answer.
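This grading order can be sketched as a short loop (names are hypothetical); the point is simply that the outer loop iterates over questions, not students:

```python
import random

def grade_exams(responses, rubrics):
    """Score every student's answer to one question before moving on.

    responses: {student_id: {question_id: answer}} (names hypothetical)
    rubrics:   {question_id: scoring_function}
    """
    scores = {student: {} for student in responses}
    for question, scorer in rubrics.items():   # outer loop: questions
        order = list(responses)
        random.shuffle(order)                  # randomize grading order
        for student in order:                  # inner loop: students
            scores[student][question] = scorer(responses[student][question])
    return scores
```

Because one item's criteria stay in working memory for the whole pass, scoring drifts less between the first and last paper.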

You can also use these guidelines for scoring essay items to create grading processes and rubrics for students' papers, oral presentations, course projects, and websites. For other grading strategies, see Responding to Student Writing – Principles & Practices and Commenting Effectively on Student Writing.
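To make the rubric idea concrete, here is a minimal sketch in Python; the criteria and point values below are hypothetical, not drawn from Piontek (2008):

```python
# Hypothetical rubric for an essay question; the criteria and
# point values are illustrative only.
RUBRIC = {
    "thesis clearly stated": 2,
    "argument supported with evidence from readings": 4,
    "counterarguments addressed": 2,
    "organization and clarity": 2,
}

def score_essay(criteria_met):
    """Sum the points for every rubric criterion the response satisfies."""
    return sum(points for criterion, points in RUBRIC.items()
               if criterion in criteria_met)

# A response meeting only the first two criteria earns 6 of 10 points.
print(score_essay({"thesis clearly stated",
                   "argument supported with evidence from readings"}))
```

Writing the rubric as an explicit criterion-to-points mapping is what makes scoring mechanical and repeatable across readers.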

References

Cashin, W. E. (1987). Improving essay tests. Idea Paper, No. 17. Manhattan, KS: Center for Faculty Evaluation and Development, Kansas State University.

Gronlund, N. E., & Linn, R. L. (1990). Measurement and evaluation in teaching (6th ed.). New York: Macmillan Publishing Company.

Halpern, D. H., & Hakel, M. D. (2003). Applying the science of learning to the university and beyond. Change, 35(4), 37-41.

McKeachie, W. J., & Svinicki, M. D. (2006). Assessing, testing, and evaluating: Grading is not the most important function. In McKeachie's Teaching tips: Strategies, research, and theory for college and university teachers (12th ed., pp. 74-86). Boston: Houghton Mifflin Company.

McMillan, J. H. (2001). Classroom assessment: Principles and practice for effective instruction. Boston: Allyn and Bacon.

Park, J. (2008, February 4). Personal communication. University of Michigan College of Pharmacy.

Piontek, M. (2008). Best practices for designing and grading exams. CRLT Occasional Paper No. 24. Ann Arbor, MI: Center for Research on Learning and Teaching.

Shipan, C. (2008, February 4). Personal communication. University of Michigan Department of Political Science.

Svinicki, M. D. (1999). Evaluating and grading students. In Teachers and students: A sourcebook for UT-Austin faculty (pp. 1-14). Austin, TX: Center for Teaching Effectiveness, University of Texas at Austin.

Thorndike, R. M. (1997). Measurement and evaluation in psychology and education. Upper Saddle River, NJ: Prentice-Hall, Inc.

Wiggins, G. P. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass Publishers.

Worthen, B. R., Borg, W. R., & White, K. R. (1993). Measurement and evaluation in the schools. New York: Longman.

Writing and grading essay questions. (1990, September). For Your Consideration, No. 7. Chapel Hill, NC: Center for Teaching and Learning, University of North Carolina at Chapel Hill.


Utilizing Extended Response Items to Enhance Student Learning


"Extended response items" have traditionally been called "essay questions." An extended response item is an open-ended question that begins with some type of prompt. These questions allow students to write a response that arrives at a conclusion based on their specific knowledge of the topic. An extended response item takes considerable time and thought. It requires students not only to give an answer but also to explain the answer with as much in-depth detail as possible. In some cases, students not only have to give an answer and explain the answer, but they also have to show how they arrived at that answer.

Teachers love extended response items because they require students to construct an in-depth response that proves mastery or the lack thereof. Teachers can then use this information to reteach gap concepts or build upon individual student strengths. Extended response items require students to demonstrate a greater depth of knowledge than they would need on a multiple-choice item. Guessing is almost completely eliminated with an extended response item: a student either knows the information well enough to write about it or does not. Extended response items are also a great way to assess and teach grammar and writing, since an extended response item also tests a student's ability to write coherently and grammatically.

Extended response items require essential critical thinking skills. An essay, in a sense, is a riddle that students can solve using prior knowledge, making connections, and drawing conclusions. This is an invaluable skill for any student to have. Those who can master it have a better chance of being successful academically.  Any student who can successfully solve problems and craft well-written explanations of their solutions will be at the top of their class. 

Extended response items do have their shortcomings. They are not teacher-friendly in that they are difficult to construct and score. Extended response items take a lot of valuable time to develop and grade. Additionally, they are difficult to score accurately, and it can be hard for teachers to remain objective when scoring them. Each student produces a completely different response, and teachers must read the entire response looking for evidence that proves mastery. For this reason, teachers must develop an accurate rubric and follow it when scoring any extended response item.

An extended response assessment takes more time for students to complete than a multiple-choice assessment. Students must first organize the information and construct a plan before they can actually begin responding to the item. This time-consuming process can take multiple class periods to complete, depending on the specific nature of the item itself.

Extended response items can be constructed in more than one way. One format is passage-based: students are provided with one or more passages on a specific topic, which can help them formulate a more thoughtful response. The student must use evidence from the passages to formulate and validate a response to the extended response item. The more traditional method is a straightforward, open-ended question on a topic or unit that has been covered in class. Students are not given a passage to assist them in constructing a response but instead must draw from memory their direct knowledge of the topic.

Teachers must remember that formulating a well-written extended response is a skill in itself. Though extended response items can be a great assessment tool, teachers must be prepared to spend the time to teach students how to write a formidable essay. This is not a skill that comes without hard work. Teachers must equip students with the multiple skills required to write successfully, including sentence and paragraph structure, proper grammar, pre-writing activities, editing, and revising. Teaching these skills must become part of the expected classroom routine for students to become proficient writers.



Assessment: Examples from a range of disciplines

What does a course with aligned learning outcomes, assessment, and teaching and learning activities look like?

We are currently collecting Camosun examples from across the college (see below), within the cognitive, psychomotor, and affective domains of learning. We will continue adding to this list. Meanwhile, for additional ideas, check out the discipline-specific examples from the University of Waterloo.

Examples for different disciplines:

  • Examples from the psychomotor domain: Carpentry
  • Examples from the cognitive domain: Electrical
  • Examples from the affective domain: English
  • Examples from the cognitive domain: English Language Development
  • Examples from the cognitive domain: Marketing
  • Examples for the cognitive and psychomotor domains: Practical Nursing
  • Examples from the cognitive domain: Psychology



Examples of Restricted-Response Performance Tasks



A restricted-response assessment performance task is a teaching tool. Corporations and some smaller companies use this specifically for training new employees or updating skills. Assessment tools for corporate training vary depending on the position being trained and the focus of the office. Some offices and departments are more specialized than others. A "restricted-response" approach is the simplest teaching tool, and is usually a short-answer or fill-in-the-blank style question.

Basics of These Tasks

An assessment task is restricted in its response if it is highly specific and, generally, contains only one correct answer, explains the University of Delaware. Typical examples might be multiple-choice or true-false questions. A more active style of task would be an emergency response drill in a factory.

In that case, factory workers would be drilled to see if they know the proper response for different emergencies. A drill would then be arranged and the response to an emergency assessed. This is a restricted response in that there is a single proper procedure and a specific set of actions requested.

Sarbanes-Oxley Test

One type of restricted-performance task is a written test on new laws. The Sarbanes-Oxley Act of 2002 forced corporate accountants to master new skills. In this context, the head of accounting might put his workers through a restricted task of filling out new reports. He may give make-believe data that the accountants must then synthesize in the new forms to satisfy the new act. In this case, the detail of the reports must be much greater than had been accepted before. This would be the focus of the assessment.

Computer Software Test

Retraining workers on new software is another restricted-task example. Since the passage of Sarbanes-Oxley in 2002, corporations have needed to update their accounting software to increase the reliability, detail, and comprehensiveness of corporate reports and bookkeeping. This means that all accountants, comptrollers, and finance officers must be trained in the newer, updated software. In some cases, company lawyers specializing in financial disclosures would also be tested on this knowledge.

A simple run-through assessment is normally required here. This becomes a highly restricted task because the workers must show they are capable of using the equipment well. This is restricted because there is one program and one proper way to use it. One of the advantages of restricted response questions is that they are not “open-ended,” like an interpretive essay would be.

Security Issue Tasks

Security officials in sensitive industries also are often required to engage in restricted response assessments. In fields such as nuclear energy and environmental science, weapons laboratories or high-technological military firms such as TRW or Oracle, security personnel are essential. Here, there are several examples of restricted assessments.

One might be a typical paper exam dealing with their responsibilities, especially under a heightened threat of terrorism. Another might be a set of drills dealing with security responses to specific situations such as a power failure, explosion, fuel leak or criminal trespass. In this case, security personnel would be drilled in the necessary and proper response under controlled conditions.

  • University of Delaware: Overheads for Unit 8--Chapter 11 (Performance-Based Assessment)
  • A Guide To The Sarbanes-Oxley Act


  1. Chapter 10 Measuring Complex Achievement Essay Questions BY

    restricted response essay questions

  2. Constructing restricted-response essay questions

    restricted response essay questions

  3. Essay question construction

    restricted response essay questions

  4. Restricted-Response Essay Rubric

    restricted response essay questions

  5. [KICE Card News] Restricted and Extended Response Essay Assessment

    restricted response essay questions

  6. Examples of restricted response essay questions by chriswpyut

    restricted response essay questions


  1. Constructing Restricted Response


  3. How to Develop Short and Long Questions: CRQs, RRQs & ERQs

  4. Extended Response, Episode I

  5. From Failing to 10/10

  6. Open ended & close ended questions/tests, Extended response, restricted response test/questions


  1. Overheads for Unit 7--Chapter 10 (Essay Questions)

    Overheads for Unit 7--Chapter 10 (Essay Questions) They represent a continuum in how much freedom of response is allowed, ranging from restricted-response essays on one end to extended-response essays on the other. Represent a continuum in complexity and breadth of learning outcomes assessed, with interpretive exercises on the left end ...

  2. Classroom Assessment

    There are 2 major categories of essay questions -- short response (also referred to as restricted or brief) and extended response. Short Response. Short response questions are more focused and constrained than extended response questions. For example, a short response might ask a student to "write an example," "list three reasons," or "compare ...

  3. Tips for Creating and Scoring Essay Tests

    Restricted Response - These essay questions limit what the student will discuss in the essay based on the wording of the question. For example, "State the main differences between John Adams' and Thomas Jefferson's beliefs about federalism," is a restricted response. What the student is to write about has been expressed to them within the question.

  4. Types of Questions in Teacher Made Achievement Tests: A Comprehensive

    Restricted Response Essay Questions. Reflective Essays: These typically involve a shorter response, reflecting on a specific question or scenario. Analysis Essays: Students dissect a particular concept or event within a set framework. Restricted response essay questions are valuable for assessing specific skills or knowledge within a limited ...

  5. PDF Guidelines for writing test questions S Selected-response test questions

    7. Restricted-response essay questions should be framed in a way to provide students with boundaries for their responses. For example, rather than a question that asks 'explain the workings of the digestive system', a better question would be 'in one page, explain the role of the large intestine in the digestion process'.

  6. Supply Test Items

    Restricted response questions place strict limits on the answer to be given, and are narrowly defined by the problem and the specific form of the answer is commonly indicated. ... Restricted Response Essay: Describe the relative merits of selection-type test items and essay questions for measurable learning outcomes at the understanding level ...

  7. Extended Constructed Response Prompts

    Once you've selected your pair of high-interest texts, you're ready to write the essay prompt. STEP 2: Write an Aligned, Extended-Response Prompt. To write an aligned, extended-response prompt, start by reading an example extended-response prompt from a released state test. Here is a sample prompt from a 7th grade Smarter Balanced assessment:

  8. Best Practices for Designing and Grading Exams

    Restricted response essay questions are good for assessing basic knowledge and understanding and generally require a brief written response (e.g., "State two hypotheses about why birds migrate. Summarize the evidence supporting each hypothesis" [Worthen, et al., 1993, p. 277].)

  9. PDF Writing Better Essay Exams IDEA Paper #76 March 2019

    for extended-response essay-test item design, implementation, and evaluation. A Few Definitions Before examining the creation and implementation of essay exams, it is worthwhile to clarify some important terms. There are two broad types of "essay" exam items (Clay, 2001; Nilson, 2017). Restricted-response, or short-answer, questions likely have

  10. How an Extended Response Item Can Enhance Learning

    "Extended response items" have traditionally been called "essay questions." An extended response item is an open-ended question that begins with some type of prompt. These questions allow students to write a response that arrives at a conclusion based on their specific knowledge of the topic. An extended response item takes considerable time ...

  11. PDF Essay Items

    called restricted) questions. An open-ended (or unrestricted response) essay question is one where there are no restrictions on the response, including the amount of time allowed to finish, the number of pages written, or material included. Now, it is a bit impractical to allow test takers to have 25 hours to answer one essay question or to ...

  12. Essay Tests: Use, Development, and Grading

    essay questions allow an extended response; they can lead to disjointed, irrelevant, superficial, or unexpected discussions by students who have difficulty organizing their thoughts on paper. The third type of essay question suggests a focused response and can lead to simple recall of information and a mass of details. Value of Essay Questions

  13. How to Successfully Write Constructed-Response Essays

    1. Read the prompt/question carefully. If you misread the question, you could write the most fantastic essay ever - and still fail. Making sure you understand the question being asked is the #1 most important thing students need to do during standardized testing. Teachers can drill this fact during their writing class.

  14. Educational Assessment: Alex

    Question Write two essay questions using both an extended-response format and a restricted-response format. Your extended-response question should be targeted to measure a synthesis or evaluation objective, while the latter should be targeted to measure a comprehension, application, or analysis objective.

  15. iRubric: Restricted Response Question rubric

    Restricted Response Question. Short Essay Questions. Use this rubric for grading student responses that are part of a test or quiz that include other types of questions as well. Can be customized for any subject. Rubric Code: L35C84.

  16. Essay type question

    Advantages of Restricted Response Questions:
    • A restricted response question is more structured.
    • It measures specific learning outcomes.
    • It provides for more ease of assessment.
    • Any outcome measured by an objective interpretive exercise can also be measured by a restricted response essay question.

  17. Research guides: Assessment: Examples from a range of disciplines

    Matching items questions. Restricted response essay question. Students write descriptions of resistors and electrical meters and peer assess using instructor-generated rubric; Apply. Effectively install cables, fixtures and fittings. Think-Pair-Share with a partner about how to organize the list

  18. iRubric: Restricted Response Essay Rubric

    Major Project: Restricted Response Essay Items (70%). Prompt: Analyze gender roles and their relationship to sexual harassment in schools. Performance levels: Unsatisfactory (0 pts), Needs Improvement (1 pt), Good (3 pts).
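
A point-based rubric like the one above can be thought of as a simple mapping from performance levels to points, averaged across graded criteria. The sketch below illustrates that idea only; the level names and point values mirror the iRubric example, but the function names and the percentage-averaging scheme are assumptions, not part of any cited rubric.

```python
# Minimal sketch of rubric-based scoring for a restricted-response essay item.
# Level names and point values follow the iRubric example above; the helper
# names and the percentage calculation are illustrative assumptions.

RUBRIC_LEVELS = {
    "Unsatisfactory": 0,
    "Needs Improvement": 1,
    "Good": 3,
}

def score_item(level: str) -> int:
    """Return the points awarded for a single rubric level."""
    try:
        return RUBRIC_LEVELS[level]
    except KeyError:
        raise ValueError(f"Unknown rubric level: {level!r}")

def percent_score(levels: list[str], max_points: int = 3) -> float:
    """Sum the item scores and express them as a percentage of the maximum."""
    total = sum(score_item(lv) for lv in levels)
    return 100.0 * total / (max_points * len(levels))

# Two criteria graded "Good" and "Needs Improvement" earn 4 of 6 points.
print(percent_score(["Good", "Needs Improvement"]))
```

One design note: keeping the level-to-points mapping in a single dictionary makes it easy to customize the rubric for another subject, which is exactly what the iRubric listing suggests.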

  19. Examples of Restricted-Response Performance Tasks

    A "restricted-response" approach is the simplest teaching tool, and is usually a short answer or fill-in-the-blank style question. ... One of the advantages of restricted response questions is ...

  20. Essay type test

    Restricted Response Essay Questions: Restricted response usually limits both the content and the response by restricting the scope of the topic to be discussed. Useful for measuring learning outcomes requiring interpretation and application of data in a specific area.

  21. WED 463 CH 8 study guide Flashcards

    Restricted response essay questions are more easily scored than extended response essays. True. Having two or more persons grade each essay question is the best way to check the reliability of scoring. True. Only grade spelling, grammar, and punctuation of supply-type answers when they relate to the intended learning outcome.

  22. DRC1- chapter 8 Flashcards

    1. Use essay questions to measure complex learning outcomes only. 2. Relate the questions as directly as possible to the learning outcomes being measured. 3. Formulate questions that present a clear task to be performed. 4. Do not permit a choice of questions unless the learning outcome requires it.