Center for Teaching and Learning

Step 4: Develop Assessment Criteria and Rubrics

Just as we align assessments with the course learning objectives, we also align the grading criteria for each assessment with the goals of that unit of content or practice, especially for assignments that cannot be graded through automation the way that multiple-choice tests can. Grading criteria articulate what is important in each assessment, what knowledge or skills students should be able to demonstrate, and how they can best communicate that to you. When you share grading criteria with students, you help them understand what to focus on and how to demonstrate their learning successfully. From good assessment criteria, you can develop a grading rubric.

Develop Your Assessment Criteria | Decide on a Rating Scale | Create the Rubric

Developing Your Assessment Criteria

Good assessment criteria are

  • Clear and easy to understand as a guide for students
  • Attainable rather than beyond students’ grasp at their current point in the course
  • Significant in terms of the learning students should demonstrate
  • Relevant in that they assess student learning toward the course objectives addressed by that particular assessment

To create your grading criteria, consider the following questions:

  • What is the most significant content or knowledge students should be able to demonstrate understanding of at this point in the course?
  • What specific skills, techniques, or applications should students be able to demonstrate at this point in the course?
  • What secondary skills or practices are important for students to demonstrate in this assessment? (for example, critical thinking, public speaking skills, or writing as well as more abstract concepts such as completeness, creativity, precision, or problem-solving abilities)
  • Do the criteria align with the objectives for both the assessment and the course?

Once you have developed some ideas about the assessment’s grading criteria, double-check to make sure the criteria are observable, measurable, significant, and distinct from each other.

Assessment Criteria Example

Using the questions above, the performance criteria in the example below were designed for an assignment in which students had to create an explainer video about a scientific concept for a specified audience. Each element can be observed and measured based on both expert instructor and peer feedback, and each is significant because it relates to the course and assignment learning goals.

[Image: sample assessment criteria for the scientific concept explainer video assignment]

Additional Assessment Criteria Resources

  • Developing Grading Criteria (Vanderbilt University)
  • Creating Grading Criteria (Brown University)
  • Sample Criteria (Brown University)
  • Developing Grading Criteria (Temple University)

Decide on a Rating Scale

Deciding what scale you will use for an assessment depends on the type of learning you want students to demonstrate and the type of feedback you want to give students on this particular assignment or test. For example, for an introductory lab report early in the semester, you might be less concerned with advanced levels of precision than with correct displays of data and the tone of the report; therefore, grading heavily on copy editing or advanced analysis would not be appropriate. The criteria would likely be more rigorous by the end of the semester, as you build up to the advanced level you want students to reach in the course.

Rating scales turn the grading criteria you have defined into levels of performance expectations for the students that can then be interpreted as a letter, number, or level. Common rating scales include

  • A, B, C, etc. (with or without + and -)
  • 100-point scale with defined cut-offs for each letter grade if desired (e.g., B = 80-89; or B+ = 87-89, B = 83-86, B- = 80-82)
  • Yes or no, present or not present (if the rubric is a checklist of items students must show)
  • Below expectations, meets expectations, exceeds expectations
  • Not demonstrated, poor, average, good, excellent

Once you have decided on a scale for the type of assignment and the learning you want students to demonstrate, you can use the scale to clearly articulate what each level of performance looks like, such as defining what A, B, C, etc. level work would look like for each grading criterion. What would distinguish a student who earns a B from one who earns a C? What would distinguish a student who excelled in demonstrating use of a tool from a student who clearly was not familiar with it? Write these distinctions out in descriptive notes or brief paragraphs.
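
To make the mapping from a numeric scale to letter grades concrete, here is a minimal Python sketch (not part of the original guide). The B-range cut-offs mirror the example above; the A and C bands are hypothetical placeholders you would replace with your own scale.

    # Minimal sketch: map a percentage score to a letter grade.
    # B bands follow the example above (B+ = 87-89, B = 83-86, B- = 80-82);
    # the other bands are hypothetical and should be replaced with your own cut-offs.
    def letter_grade(percent: float) -> str:
        bands = [
            (90, "A"), (87, "B+"), (83, "B"), (80, "B-"),
            (77, "C+"), (73, "C"), (70, "C-"),
        ]
        for cutoff, letter in bands:
            if percent >= cutoff:
                return letter
        return "F"

    print(letter_grade(86))  # prints "B"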

Ethical Implications of Rating Scales

There are ethical implications in each of these types of rating scales. On a project worth 100 points, what is the objective difference between earning an 85 and an 87? On an exceeds/meets/does not meet scale, how can those levels be objectively applied? Different understandings of "fairness" can lead to several ways of grading that might disadvantage some students. Learn more about equitable grading practices.

Create the Rubric

Rubrics Can Make Grading More Effective

  • Provide students with more complete and targeted feedback
  • Make grading more timely by enabling feedback soon after an assignment is submitted or presented
  • Standardize assessment criteria among those assigning and assessing the same assignment
  • Facilitate peer evaluation of early drafts of assignments

Rubrics Can Help Student Learning

  • Convey your expectations about the assignment through a classroom discussion of the rubric prior to the beginning of the assignment
  • Level the playing field by clarifying academic expectations and assignments so that all students understand them regardless of their educational backgrounds (e.g., define what you expect analysis, critical thinking, or even introductions and conclusions to include)
  • Promote student independence and motivation by enabling self-assessment
  • Prepare students to use detailed feedback.

Rubrics Have Other Uses:

  • Track development of student skills over several assignments
  • Facilitate communication with others (e.g., TAs, communication center, tutors, other faculty, etc.)
  • Refine your own teaching (e.g., by responding to common areas of weakness, or by using rubric results as feedback on how well your teaching strategies are preparing students for their assignments)

In this video, CTL's Dr. Carol Subino Sullivan discusses the value of the different types of rubrics.

Many non-test-based assessments might seem daunting to grade, but a well-designed rubric can alleviate some of that work. A rubric is a table that usually has these parts:  

  • a clear description of the learning activity being assessed
  • criteria by which the activity will be evaluated
  • a rating scale identifying different levels of performance
  • descriptions of the performance a student must demonstrate to earn each level.

When you define the criteria and what acceptable performance on each of those criteria looks like ahead of time, you can use the rubric to compare against student work and assign grades or points for each criterion accordingly. Rubrics work very well for projects, papers/reports, and presentations, as well as in peer review, and good rubrics can save instructors and TAs time when grading.

Sample Rubrics

This final rubric for the scientific concept explainer video combines the assessment criteria and the holistic rating scale:

[Image: completed rubric for the scientific concept explainer video assignment]

When using this rubric, which can be easily adapted to use a present/not present rating scale or a letter grade scale, you can use a combination of checking items off and adding written (or audio/video) comments in the different boxes to provide the student more detailed feedback. 

As a second example, this descriptive rubric asks students to peer-assess and self-assess their contributions to a collaborative project. The rating scale is 1 through 4, and each description of performance builds on the previous one. (The full rubric includes scales for both the product and the process; it was designed for students working in teams to assess their own contributions to the project as well as their peers'.)

[Image: descriptive rubric for peer and self-assessment of contributions to a collaborative project]

Building a Rubric in Canvas Assignments

You can create rubrics for assignments and discussion boards in Canvas. Review these Canvas guides for tips and tricks:

  • Rubrics Overview for Instructors
  • What are rubrics?
  • How do I align a rubric with a learning outcome?
  • How do I add a rubric to an assignment?
  • How do I add a rubric to a quiz?
  • How do I add a rubric to a graded discussion?
  • How do I use a rubric to grade submissions in SpeedGrader?
  • How do I manage rubrics in a course?

Additional Resources for Developing Rubrics

Designing Grading Rubrics (Brown University) Step-by-step process for creating an effective, fair, and efficient grading rubric.

Creating and Using Rubrics  (Carnegie Mellon University) Explores the basics of rubric design along with multiple examples for grading different types of assignments.

Using Rubrics  (Cornell University) Argument for the value of rubrics to support student learning.

Rubrics (University of California Berkeley) Shares "fun facts" about rubrics and links to rubric guidelines from many higher ed organizations such as the AAC&U.

Creating and Using Rubrics  (Yale University) Introduces different styles of rubrics and ways to decide what style to use given your course's learning goals.

Best Practices for Designing Effective Rubrics (Arizona State University) Comprehensive overview of rubric design principles.


Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks as important as the main assignment?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric includes all the criteria (such as clarity, organization, mechanics, etc.) to be considered together and included in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove the focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback

Step 3 (Optional): Look for templates and examples.

You might Google, “Rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but work through the remaining steps (defining criteria, designing the rating scale, and writing level descriptions) to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc. for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Check that each criterion can be observed and measured
  • Check that each criterion is important and essential
  • Check that each criterion is distinct from the others
  • Check that each criterion is phrased in precise, unambiguous language
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them (see the sketch after this list).
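
If you do weight some criteria more heavily than others, the scoring is a simple weighted sum. The Python sketch below is a hypothetical illustration only; the criterion names, weights, and 1-4 rating scale are placeholders, not part of this guide.

    # Hypothetical weighted rubric score: each criterion is rated 1-4,
    # and the weights express each criterion's relative importance (summing to 1.0).
    weights = {"argument": 0.4, "evidence": 0.3, "organization": 0.2, "mechanics": 0.1}
    ratings = {"argument": 3, "evidence": 4, "organization": 3, "mechanics": 2}

    max_rating = 4
    weighted = sum(weights[c] * ratings[c] for c in weights)  # weighted rating on the 1-4 scale
    percent = 100 * weighted / max_rating                     # convert to a percentage
    print(f"Weighted rating: {weighted:.2f}/4 ({percent:.0f}%)")  # 3.20/4 (80%)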

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT have proven useful for creating rubrics. You will want to engineer the prompt you provide to the AI assistant to ensure you get what you want. For example, you might include the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.
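
For example, a prompt along these lines can serve as a starting point (the wording, criteria, and level labels here are hypothetical; substitute your own):

    "Here is my assignment description: [paste the description]. Create an analytic rubric for it
    with these criteria: thesis, use of evidence, organization, and mechanics. Use four levels of
    performance (Exemplary, Proficient, Developing, Beginning) and write a one- to two-sentence
    description for each cell of the rubric."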

Building a rubric from scratch

For a single-point rubric , describe what would be considered “proficient,” i.e. B-level work, and provide that description. You might also include suggestions for students outside of the actual rubric about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs absence, complete vs incomplete, many vs none, major vs minor, consistent vs inconsistent, always vs never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric, do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric

Step 8: Pilot-test your rubric

Prior to implementing your rubric on a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language . Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students . Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” you might describe what makes a resource excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper
(example table not reproduced here)

Example of a holistic rubric for a final paper
(example table not reproduced here)

Single-point rubric
(example table not reproduced here)

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics.
  • Gonzalez, J. (2014). Know your terms: Holistic, Analytic, and Single-Point Rubrics. Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Educational Leadership, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: Tips for designing and using rubrics.
  • Ragupathi, K., & Lee, A. (2020). Beyond fairness and consistency in grading: The role of rubrics in higher education. In C. Sanger & N. Gleason (Eds.), Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.


Am J Pharm Educ. 2010 Nov 10; 74(9).

A Standardized Rubric to Evaluate Student Presentations

Michael J. Peeters (a), Eric G. Sahloff, and Gregory E. Stone (b)

(a) University of Toledo College of Pharmacy
(b) University of Toledo College of Education

Objective. To design, implement, and assess a rubric to evaluate student presentations in a capstone doctor of pharmacy (PharmD) course.

Design. A 20-item rubric was designed and used to evaluate student presentations in a capstone fourth-year course in 2007-2008, and then revised and expanded to 25 items and used to evaluate student presentations for the same course in 2008-2009. Two faculty members evaluated each presentation.

Assessment. The Many-Facets Rasch Model (MFRM) was used to determine the rubric's reliability, quantify the contribution of evaluator harshness/leniency in scoring, and assess grading validity by comparing the current grading method with a criterion-referenced grading scheme. In 2007-2008, rubric reliability was 0.98, with a separation of 7.1 and 4 rating scale categories. In 2008-2009, MFRM analysis suggested 2 of 98 grades be adjusted to eliminate evaluator leniency, while a further criterion-referenced MFRM analysis suggested 10 of 98 grades should be adjusted.

Conclusion. The evaluation rubric was reliable and evaluator leniency appeared minimal. However, a criterion-referenced re-analysis suggested a need for further revisions to the rubric and evaluation process.

INTRODUCTION

Evaluations are important in the process of teaching and learning. In health professions education, performance-based evaluations are identified as having “an emphasis on testing complex, ‘higher-order’ knowledge and skills in the real-world context in which they are actually used.” 1 Objective structured clinical examinations (OSCEs) are a common, notable example. 2 On Miller's pyramid, a framework used in medical education for measuring learner outcomes, “knows” is placed at the base of the pyramid, followed by “knows how,” then “shows how,” and finally, “does” is placed at the top. 3 Based on Miller's pyramid, evaluation formats that use multiple-choice testing focus on “knows” while an OSCE focuses on “shows how.” Just as performance evaluations remain highly valued in medical education, 4 authentic task evaluations in pharmacy education may be better indicators of future pharmacist performance. 5 Much attention in medical education has been focused on reducing the unreliability of high-stakes evaluations. 6 Regardless of educational discipline, high-stakes performance-based evaluations should meet educational standards for reliability and validity. 7

PharmD students at University of Toledo College of Pharmacy (UTCP) were required to complete a course on presentations during their final year of pharmacy school and then give a presentation that served as both a capstone experience and a performance-based evaluation for the course. Pharmacists attending the presentations were given Accreditation Council for Pharmacy Education (ACPE)-approved continuing education credits. An evaluation rubric for grading the presentations was designed to allow multiple faculty evaluators to objectively score student performances in the domains of presentation delivery and content. Given the pass/fail grading procedure used in advanced pharmacy practice experiences, passing this presentation-based course and subsequently graduating from pharmacy school were contingent upon this high-stakes evaluation. As a result, the reliability and validity of the rubric used and the evaluation process needed to be closely scrutinized.

Each year, about 100 students completed presentations and at least 40 faculty members served as evaluators. With the use of multiple evaluators, a question of evaluator leniency often arose (ie, whether evaluators used the same criteria for evaluating performances or whether some evaluators graded easier or more harshly than others). At UTCP, opinions among some faculty evaluators and many PharmD students implied that evaluator leniency in judging the students' presentations significantly affected specific students' grades and ultimately their graduation from pharmacy school. While it was plausible that evaluator leniency was occurring, the magnitude of the effect was unknown. Thus, this study was initiated partly to address this concern over grading consistency and scoring variability among evaluators.

Because both students' presentation style and content were deemed important, each item of the rubric was weighted the same across delivery and content. However, because there were more categories related to delivery than content, an additional faculty concern was that students feasibly could present poor content but have an effective presentation delivery and pass the course.

The objectives for this investigation were: (1) to describe and optimize the reliability of the evaluation rubric used in this high-stakes evaluation; (2) to identify the contribution and significance of evaluator leniency to evaluation reliability; and (3) to assess the validity of this evaluation rubric within a criterion-referenced grading paradigm focused on both presentation delivery and content.

DESIGN

The University of Toledo's Institutional Review Board approved this investigation. This study investigated performance evaluation data for an oral presentation course for final-year PharmD students from 2 consecutive academic years (2007-2008 and 2008-2009). The course was taken during the fourth year (P4) of the PharmD program and was a high-stakes, performance-based evaluation. The goal of the course was to serve as a capstone experience, enabling students to demonstrate advanced drug literature evaluation and verbal presentation skills through the development and delivery of a 1-hour presentation. These presentations were to be on a current pharmacy practice topic and of sufficient quality for ACPE-approved continuing education. This experience allowed students to demonstrate their competencies in literature searching, literature evaluation, and application of evidence-based medicine, as well as their oral presentation skills. Students worked closely with a faculty advisor to develop their presentation. Each class (2007-2008 and 2008-2009) was randomly divided, with half of the students taking the course and completing their presentation and evaluation in the fall semester and the other half in the spring semester. To accommodate such a large number of students presenting for 1 hour each, it was necessary to use multiple rooms with presentations taking place concurrently over 2.5 days for both the fall and spring sessions of the course. Two faculty members independently evaluated each student presentation using the provided evaluation rubric. The 2007-2008 presentations involved 104 PharmD students and 40 faculty evaluators, while the 2008-2009 presentations involved 98 students and 46 faculty evaluators.

After vetting by the pharmacy practice faculty, the initial rubric used in 2007-2008 focused on describing explicit, specific evaluation criteria such as amounts of eye contact, voice pitch/volume, and descriptions of study methods. The evaluation rubric used in 2008-2009 was similar to the initial rubric, but with 5 items added (Figure 1). The evaluators rated each item (eg, eye contact) based on their perception of the student's performance. The 25 rubric items had equal weight (ie, 4 points each), but each item received a rating from the evaluator of 1 to 4 points. Thus, only 4 rating categories were included, as has been recommended in the literature. 8 However, some evaluators created an additional 3 rating categories by marking lines in between the 4 ratings to signify half points, ie, 1.5, 2.5, and 3.5. For example, for the “notecards/notes” item in Figure 1, a student looked at her notes sporadically during her presentation, but not distractingly nor enough to warrant a score of 3 in the faculty evaluator's opinion, so a 3.5 was given. Thus, a 7-category rating scale (1, 1.5, 2, 2.5, 3, 3.5, and 4) was analyzed. Each independent evaluator's ratings for the 25 items were summed to form a score (0-100%). The 2 evaluators' scores then were averaged and a letter grade was assigned based on the following scale: >90% = A, 80%-89% = B, 70%-79% = C, <70% = F.
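
As a purely hypothetical illustration of this scoring procedure (the numbers are invented, not taken from the study): if one evaluator's 25 item ratings summed to 88 points and the other's summed to 84, the student's score would be (88 + 84) / 2 = 86%, which maps to a B on the scale above.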

Figure 1. Rubric used to evaluate student presentations given in a 2008-2009 capstone PharmD course. (Figure image not reproduced.)

EVALUATION AND ASSESSMENT

Rubric Reliability

To measure rubric reliability, iterative analyses were performed on the evaluations using the Many-Facets Rasch Model (MFRM) following the 2007-2008 data collection period. While Cronbach's alpha is the most commonly reported coefficient of reliability, its single-number reporting without supplementary information can provide incomplete information about reliability. 9-11 Due to its formula, Cronbach's alpha can be increased by simply adding more repetitive rubric items or having more rating scale categories, even when no further useful information has been added. The MFRM reports separation, which is calculated differently than Cronbach's alpha and is another source of reliability information. Unlike Cronbach's alpha, separation does not appear to be enhanced by adding further redundant items. From a measurement perspective, a higher separation value is better than a lower one because students are being divided into meaningful groups after measurement error has been accounted for. Separation can be thought of as the number of units on a ruler: the more units the ruler has, the larger the range of performance levels that can be measured among students. For example, a separation of 4.0 suggests 4 gradations such that a grade of A is distinctly different from a grade of B, which in turn is different from a grade of C or of F. In measuring performances, a separation of 9.0 is better than 5.5, just as a separation of 7.0 is better than a 6.5; a higher separation coefficient suggests that student performance potentially could be divided into a larger number of meaningfully separate groups.
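
For readers unfamiliar with the separation statistic, a relationship commonly used in Rasch measurement (not stated in the article itself) links the separation index G and the reliability R:

    G = \frac{\mathrm{SD}_{\mathrm{true}}}{\mathrm{RMSE}}, \qquad R = \frac{G^{2}}{1 + G^{2}}, \qquad G = \sqrt{\frac{R}{1 - R}}

so a reliability of R = 0.98 corresponds to a separation of roughly \sqrt{0.98 / 0.02} = 7, which is consistent with the separation values of 6.31-7.10 reported below.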

The rating scale can have substantial effects on reliability, 8 while description of how a rating scale functions is a unique aspect of the MFRM. With analysis iterations of the 2007-2008 data, the number of rating scale categories was collapsed consecutively until improvements in reliability and/or separation were no longer found. The last iteration that led to improvements in reliability or separation was deemed to yield the optimal rating scale for this evaluation rubric.

In the 2007-2008 analysis, iterations of the data were run through the MFRM. While only 4 rating scale categories had been included on the rubric, because some faculty members inserted 3 in-between categories, 7 categories had to be included in the analysis. This initial analysis based on a 7-category rubric provided a reliability coefficient (similar to Cronbach's alpha) of 0.98, while the separation coefficient was 6.31. The separation coefficient denoted 6 distinctly separate groups of students based on the items. Rating scale categories were collapsed, with “in-between” categories included in adjacent full-point categories. Table 1 shows the reliability and separation for the iterations as the rating scale was collapsed. As shown, the optimal evaluation rubric maintained a reliability of 0.98, but separation improved to 7.10, or 7 distinctly separate groups of students based on the items. Another distinctly separate group was added through a reduction in the rating scale, while no change was seen in Cronbach's alpha, even though the number of rating scale categories was reduced. Table 1 describes the stepwise, sequential pattern across the final 4 rating scale categories analyzed. Informed by the 2007-2008 results, the 2008-2009 evaluation rubric (Figure 1) used 4 rating scale categories and reliability remained high.

Table 1. Evaluation Rubric Reliability and Separation with Iterations While Collapsing Rating Scale Categories. (Table image not reproduced.)

a Reliability coefficient of variance in rater response that is reproducible (ie, Cronbach's alpha).

b Separation is a coefficient of item standard deviation divided by average measurement error and is an additional reliability coefficient.

c Optimal number of rating scale categories based on the highest reliability (0.98) and separation (7.1) values.

Evaluator Leniency

First described by Fleming and colleagues over half a century ago, 6 harsh raters (ie, hawks) and lenient raters (ie, doves) have also been demonstrated to be an issue in more recent studies. 12-14 Shortly after the 2008-2009 data were collected, the evaluations by multiple faculty evaluators were collated and analyzed in the MFRM to identify possible inconsistent scoring. While traditional interrater reliability does not deal with this issue, the MFRM had been used previously to illustrate evaluator leniency on licensing examinations for medical students and medical residents in the United Kingdom. 13 Thus, accounting for evaluator leniency may prove important to grading consistency (and reliability) in a course using multiple evaluators. Along with identifying evaluator leniency, the MFRM also corrected for this variability. For comparison, course grades were calculated by summing the evaluators' actual ratings (as discussed in the Design section) and compared with the MFRM-adjusted grades to quantify the degree of evaluator leniency occurring in this evaluation.

Measures created from the data analysis in the MFRM were converted to percentages using a common linear test-equating procedure involving the mean and standard deviation of the dataset. 15 To these percentages, student letter grades were assigned using the same traditional method used in 2007-2008 (ie, >90% = A, 80%-89% = B, 70%-79% = C, <70% = F). Letter grades calculated using the revised rubric and the MFRM were then compared to letter grades calculated using the previous rubric and course grading method.
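
The equating formula is not given in the article; one common mean-and-standard-deviation (mean-sigma) linear transformation of a Rasch measure \theta onto the percentage scale takes the form

    x = \bar{x} + s_{x}\,\frac{\theta - \bar{\theta}}{s_{\theta}}

where \bar{x} and s_{x} are the mean and standard deviation of the original percentage scores and \bar{\theta} and s_{\theta} are those of the MFRM measures. This is offered only as a plausible reading of the procedure described, not as the authors' exact method.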

In the analysis of the 2008-2009 data, the interrater reliability for the letter grades when comparing the 2 independent faculty evaluations for each presentation was 0.98 by Cohen's kappa. However, using the 3-facet MFRM revealed significant variation in grading. The interaction of evaluator leniency on student ability and item difficulty was significant, with a chi-square test yielding p < 0.01. As well, the MFRM showed a reliability of 0.77, with a separation of 1.85 (ie, almost 2 groups of evaluators). The MFRM student ability measures were scaled to letter grades and compared with course letter grades. As a result, 2 B's became A's, and so evaluator leniency accounted for a 2% change in letter grades (ie, 2 of 98 grades).

Validity and Grading

Explicit criterion-referenced standards for grading are recommended for higher evaluation validity. 3 , 16 - 18 The course coordinator completed 3 additional evaluations of a hypothetical student presentation rating the minimal criteria expected to describe each of an A, B, or C letter grade performance. These evaluations were placed with the other 196 evaluations (2 evaluators × 98 students) from 2008-2009 into the MFRM, with the resulting analysis report giving specific cutoff percentage scores for each letter grade. Unlike the traditional scoring method of assigning all items an equal weight, the MFRM ordered evaluation items from those more difficult for students (given more weight) to those less difficult for students (given less weight). These criterion-referenced letter grades were compared with the grades generated using the traditional grading process.

When the MFRM data were rerun with the criterion-referenced evaluations added into the dataset, a 10% change was seen with letter grades (ie, 10 of 98 grades). When the 10 letter grades were lowered, 1 was below a C, the minimum standard, and suggested a failing performance. Qualitative feedback from faculty evaluators agreed with this suggested criterion-referenced performance failure.

Measurement Model

Within modern test theory, the Rasch Measurement Model maps examinee ability with evaluation item difficulty. Items are not arbitrarily given the same value (ie, 1 point) but vary based on how difficult or easy the items were for examinees. The Rasch measurement model has been used frequently in educational research, 19 by numerous high-stakes testing professional bodies such as the National Board of Medical Examiners, 20 and also by various state-level departments of education for standardized secondary education examinations. 21 The Rasch measurement model itself has rigorous construct validity and reliability. 22 A 3-facet MFRM model allows an evaluator variable to be added to the student ability and item difficulty variables that are routine in other Rasch measurement analyses. Just as multiple regression accounts for additional variables in analysis compared to a simple bivariate regression, the MFRM is a multiple variable variant of the Rasch measurement model and was applied in this study using the Facets software (Linacre, Chicago, IL). The MFRM is ideal for performance-based evaluations with the addition of independent evaluator/judges. 8 , 23 From both yearly cohorts in this investigation, evaluation rubric data were collated and placed into the MFRM for separate though subsequent analyses. Within the MFRM output report, a chi-square for a difference in evaluator leniency was reported with an alpha of 0.05.

The presentation rubric was reliable. Results from the 2007-2008 analysis illustrated that the number of rating scale categories impacted the reliability of this rubric and that use of only 4 rating scale categories appeared best for measurement. While a 10-point Likert-like scale may commonly be used in patient care settings, such as in quantifying pain, most people cannot process more than 7 points or categories reliably. 24 Presumably, when more than 7 categories are used, the categories beyond 7 either are not used or are collapsed by respondents into fewer than 7 categories. Five-point scales are commonly encountered, but use of an odd number of categories can be problematic for interpretation and is not recommended. 25 Responses using the middle category could denote a true perceived average or neutral response, or responder indecisiveness, or even confusion over the question. Therefore, removing the middle category appears advantageous and is supported by our results.

With the 2008-2009 data, the MFRM identified evaluator leniency, with some evaluators grading more harshly while others were more lenient. Evaluator leniency was indeed found in the dataset, but only a couple of grade changes were suggested once the MFRM corrected for this leniency, so it did not appear to play a substantial role in the evaluation of this course at this time.

Performance evaluation instruments are either holistic or analytic rubrics. 26 The evaluation instrument used in this investigation exemplified an analytic rubric, which elicits specific observations and often demonstrates high reliability. However, Norman and colleagues point out a conundrum where drastically increasing the number of evaluation rubric items (creating something similar to a checklist) could augment a reliability coefficient though it appears to dissociate from that evaluation rubric's validity. 27 Validity may be more than the sum of behaviors on evaluation rubric items. 28 Having numerous, highly specific evaluation items appears to undermine the rubric's function. With this investigation's evaluation rubric and its numerous items for both presentation style and presentation content, equal numeric weighting of items can in fact allow student presentations to receive a passing score while falling short of the course objectives, as was shown in the present investigation. As opposed to analytic rubrics, holistic rubrics often demonstrate lower yet acceptable reliability, while offering a higher degree of explicit connection to course objectives. A summative, holistic evaluation of presentations may improve validity by allowing expert evaluators to provide their “gut feeling” as experts on whether a performance is “outstanding,” “sufficient,” “borderline,” or “subpar” for dimensions of presentation delivery and content. A holistic rubric that integrates with criteria of the analytic rubric (Figure 1) for evaluators to reflect on, but maintains a summary, overall evaluation for each dimension (delivery/content) of the performance, may allow for benefits of each type of rubric to be used advantageously. This finding has been demonstrated with OSCEs in medical education where checklists for completed items (ie, yes/no) at an OSCE station have been successfully replaced with a few reliable global impression rating scales. 29-31

Alternatively, and because the MFRM model was used in the current study, an item-weighting approach could be used with the analytic rubric. That is, item weighting based on the difficulty of each rubric item could suggest how many points should be given for that rubric item, eg, some items would be worth 0.25 points, while others would be worth 0.5 points or 1 point (Table 2). As could be expected, the more complex the rubric scoring becomes, the less feasible the rubric is to use. This was the main reason why this revision approach was not chosen by the course coordinator following this study. As well, it does not address the conundrum that the performance may be more than the summation of behavior items in the Figure 1 rubric. This current study cannot suggest which approach would be better, as each would have its merits and pitfalls.

Table 2. Rubric Item Weightings Suggested in the 2008-2009 Data Many-Facet Rasch Measurement Analysis. (Table image not reproduced.)

Regardless of which approach is used, alignment of the evaluation rubric with the course objectives is imperative. Objectivity has been described as a general striving for value-free measurement (ie, free of the evaluator's interests, opinions, preferences, sentiments). 27 This is a laudable goal pursued through educational research. Strategies to reduce measurement error, termed objectification, may not necessarily lead to increased objectivity. 27 The current investigation suggested that a rubric could become too explicit if all the possible areas of an oral presentation that could be assessed (ie, objectification) were included. This appeared to dilute the effect of important items and lose validity. A holistic rubric that is more straightforward and easier to score quickly may be less likely to lose validity (ie, “lose the forest for the trees”), though operationalizing a revised rubric would need to be investigated further. Similarly, weighting items in an analytic rubric based on their importance and difficulty for students may alleviate this issue; however, adding up individual items might prove arduous. While the rubric in Figure 1, which has evolved over the years, is the subject of ongoing revisions, it appears a reliable rubric on which to build.

The major limitation of this study involves the observational method that was employed. Although the 2 cohorts were from a single institution, investigators did use a completely separate class of PharmD students to verify initial instrument revisions. Optimizing the rubric's rating scale involved collapsing data from misuse of a 4-category rating scale (expanded by evaluators to 7 categories) by a few of the evaluators into 4 independent categories without middle ratings. As a result of the study findings, no actual grading adjustments were made for students in the 2008-2009 presentation course; however, adjustments using the MFRM have been suggested by Roberts and colleagues. 13 Since 2008-2009, the course coordinator has made further small revisions to the rubric based on feedback from evaluators, but these have not yet been re-analyzed with the MFRM.

The evaluation rubric used in this study for student performance evaluations showed high reliability and the data analysis agreed with using 4 rating scale categories to optimize the rubric's reliability. While lenient and harsh faculty evaluators were found, variability in evaluator scoring affected grading in this course only minimally. Aside from reliability, issues of validity were raised using criterion-referenced grading. Future revisions to this evaluation rubric should reflect these criterion-referenced concerns. The rubric analyzed herein appears a suitable starting point for reliable evaluation of PharmD oral presentations, though it has limitations that could be addressed with further attention and revisions.

ACKNOWLEDGEMENT

Author contributions: MJP and EGS conceptualized the study, while MJP and GES designed it. MJP, EGS, and GES gave educational content foci for the rubric. As the study statistician, MJP analyzed and interpreted the study data. MJP reviewed the literature and drafted the manuscript. EGS and GES critically reviewed this manuscript and approved the final version for submission. MJP accepts overall responsibility for the accuracy of the data, its analysis, and this report.

Assessment Rubrics

A rubric is commonly defined as a tool that articulates the expectations for an assignment by listing criteria and, for each criterion, describing levels of quality (Andrade, 2000; Arter & Chappuis, 2007; Stiggins, 2001). Criteria are used in determining the level at which student work meets expectations. Markers of quality give students a clear idea about what must be done to demonstrate a certain level of mastery, understanding, or proficiency (i.e., "Exceeds Expectations" does xyz, "Meets Expectations" does only xy or yz, "Developing" does only x or y or z). Rubrics can be used for any assignment in a course, or for any way in which students are asked to demonstrate what they've learned. They can also be used to facilitate self- and peer-review of student work.

Rubrics aren't just for summative evaluation. They can be used as a teaching tool as well. When used as part of a formative assessment, they can help students understand both the holistic nature and/or the specific analytic components of the learning expected and the level of learning expected, and then make decisions about their current level of learning to inform revision and improvement (Reddy & Andrade, 2010).

Why use rubrics?

Rubrics help instructors:

Provide students with feedback that is clear, directed and focused on ways to improve learning.

Demystify assignment expectations so students can focus on the work instead of guessing "what the instructor wants."

Reduce time spent on grading and develop consistency in how you evaluate student learning across students and throughout a class.

Rubrics help students:

Focus their efforts on completing assignments in line with clearly set expectations.

Self- and peer-reflect on their learning, making informed changes to achieve the desired learning level.

Developing a Rubric

During the process of developing a rubric, instructors might:

Select an assignment for your course - ideally one you identify as time-intensive to grade or one that students report as having unclear expectations.

Decide what you want students to demonstrate about their learning through that assignment. These are your criteria.

Identify the markers of quality on which you feel comfortable evaluating students’ level of learning - often along with a numerical scale (i.e., "Accomplished," "Emerging," "Beginning" for a developmental approach).

Give students the rubric ahead of time. Advise them to use it in guiding their completion of the assignment.

It can be overwhelming to create a rubric for every assignment in a class at once, so start by creating one rubric for one assignment. See how it goes and develop more from there! Also, do not reinvent the wheel. Rubric templates and examples exist all over the Internet, or consider asking colleagues if they have developed rubrics for similar assignments. 

Sample Rubrics

Examples of holistic and analytic rubrics : see Tables 2 & 3 in “Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners” (Allen & Tanner, 2006)

Examples across assessment types : see “Creating and Using Rubrics,” Carnegie Mellon Eberly Center for Teaching Excellence & Educational Innovation

“VALUE Rubrics” : see the Association of American Colleges and Universities set of free, downloadable rubrics, with foci including creative thinking, problem solving, and information literacy. 

References

Andrade, H. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-18.

Arter, J., & Chappuis, J. (2007). Creating and recognizing quality rubrics. Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.

Stiggins, R. J. (2001). Student-involved classroom assessment (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Reddy, Y., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448.


Rubric formats for the formative assessment of oral presentation skills acquisition in secondary education

  • Development Article
  • Open access
  • Published: 20 July 2021
  • Volume 69, pages 2663–2682 (2021)


  • Rob J. Nadolski (ORCID: orcid.org/0000-0002-6585-0888), Hans G. K. Hummel, Ellen Rusman & Kevin Ackermans


Acquiring complex oral presentation skills is cognitively demanding for students and demands intensive teacher guidance. The aim of this study was twofold: (a) to identify and apply design guidelines in developing an effective formative assessment method for oral presentation skills during classroom practice, and (b) to develop and compare two analytic rubric formats as part of that assessment method. Participants were first-year secondary school students in the Netherlands (n = 158) who acquired oral presentation skills with the support of either a formative assessment method with analytic rubrics offered through a dedicated online tool (experimental groups), or a method using more conventional (rating scales) rubrics (control group). One experimental group was provided text-based rubrics and the other was provided video-enhanced rubrics. No prior research is known about analytic video-enhanced rubrics, but, based on research on complex skill development and multimedia learning, we expected this format to best capture the (non-verbal aspects of) oral presentation performance. Significant positive differences in oral presentation performance were found between the experimental groups and the control group. However, no significant differences were found between the two experimental groups. This study shows that a well-designed formative assessment method, using analytic rubric formats, outperforms formative assessment using more conventional rubric formats. It also shows that the higher costs of developing video-enhanced analytic rubrics cannot be justified by significantly greater performance gains. Future studies should address the generalizability of such formative assessment methods to other contexts and to complex skills other than oral presentation, and should lead to a more profound understanding of video-enhanced rubrics.


Introduction

Both practitioners and scholars agree that students should be able to present orally (e.g., Morreale & Pearson, 2008; Smith & Sodano, 2011). Oral presentation involves the development and delivery of messages to the public with attention to vocal variety, articulation, and non-verbal signals, and with the aim to inform, self-express, relate to and persuade listeners (Baccarini & Bonfanti, 2015; De Grez et al., 2009a; Quianthy, 1990). The current study is restricted to informative presentations (as opposed to persuasive presentations), as these are most common in secondary education. Oral presentation skills are complex generic skills of increasing importance for both society and education (Voogt & Roblin, 2012). However, secondary education seems to lack instructional design guidelines for supporting oral presentation skills acquisition. Many secondary schools in the Netherlands are struggling with how to teach and assess students’ oral presentation skills, lack clear performance criteria for oral presentations, and fall short in offering adequate formative assessment methods that support the effective acquisition of oral presentation skills (Sluijsmans et al., 2013).

Many researchers agree that the acquisition and assessment of presentation skills should depart from a socio-cognitive perspective (Bandura, 1986) with emphasis on observation, practice, and feedback. Students practice new presentation skills by observing other presentations as modeling examples, then practice their own presentation, after which the feedback is addressed by adjusting their presentations towards the required levels. Evidently, delivering effective oral presentations requires much preparation, rehearsal, and practice, interspersed with good feedback, preferably from oral presentation experts. However, large class sizes in secondary schools of the Netherlands offer only limited opportunities for teacher-student interaction, and offer even fewer practice opportunities. Based on research on complex skill development and multimedia learning, it can be expected that video-enhanced analytic rubric formats best capture and guide oral presentation performance, since much non-verbal behavior cannot be captured in text (Van Gog et al., 2014; Van Merriënboer & Kirschner, 2013).

Formative assessment of complex skills

To support complex skills acquisition under limited teacher guidance, we will need more effective formative assessment methods (Boud & Molloy, 2013) based on proven instructional design guidelines. During skills acquisition students will perceive specific feedback as more adequate than non-specific feedback (Shute, 2008). Adequate feedback should inform students about (i) their task performance, (ii) their progress towards intended learning goals, and (iii) what they should do to further progress towards those goals (Hattie & Timperley, 2007; Narciss, 2008). Students receiving specific feedback on criteria and performance levels will become equipped to improve oral presentation skills (De Grez et al., 2009a; Ritchie, 2016). Analytic rubrics are therefore promising formats to provide specific feedback on oral presentations, because they can demonstrate the relations between subskills and explain the open-endedness of ideal presentations (through textual descriptions and their graphical design).

Ritchie (2016) showed that adding structure and self-assessment to peer- and teacher-assessments resulted in better oral presentation performance. Students were required to use analytic rubrics for self-assessment during their (project-based) classroom education. In this way, they had ample opportunity to observe and reflect on the attributes of (good) oral presentations, which was shown to foster the acquisition of their oral presentation skills.

Analytic rubrics incorporate performance criteria that inform teachers and students when preparing oral presentations. Such rubrics support mental model formation and enable adequate feedback provision by teachers, peers, and self (Brookhart & Chen, 2015; Jonsson & Svingby, 2007; Panadero & Jonsson, 2013). Research is inconclusive about the most effective formats and delivery media, but most studies dealt with analytic text-based rubrics delivered on paper. However, digital video-enhanced analytic rubrics are expected to be more effective for acquiring oral presentation skills, since many behavioral aspects refer to non-verbal actions and processes that can only be captured on video (e.g., body posture or use of voice during a presentation).

This study is situated within the Viewbrics project, in which video-modelling examples are integrated with analytic text-based rubrics (Ackermans et al., 2019a). The video-modelling examples contain question prompts, illustrate behavior associated with (sub)skill performance levels in context, and are presented by young actors the target group can identify with. The question prompts require students to link behavior to performance levels and to build a coherent picture of the (sub)skills and levels. To the best of the authors’ knowledge, there are no previous studies on such video-enhanced analytic rubrics. The Viewbrics tool has been incrementally developed and validated with teachers and students to structure the formative assessment method in classroom settings (Rusman et al., 2019).

The purpose of our study is twofold. On the one hand, it investigates whether the application of evidence-based design guidelines results in a more effective formative assessment method in the classroom. On the other hand, it investigates (within that method) whether video-enhanced analytic rubrics are more effective than text-based analytic rubrics.

Research questions

The twofold purpose of this study is reflected in two research questions: (1) To what extent do analytic rubrics within formative assessment lead to better oral presentation performance? (the design-based part of this study); and (2) To what extent do video-enhanced analytic rubrics lead to better oral presentation performance (growth) than text-based analytic rubrics? (the experimental part of this study). We hypothesize that all students will improve their oral presentation performance over time, but that students in the experimental groups (receiving analytic rubrics designed according to proven design guidelines) will outperform a control group (receiving conventional rubrics) (Hypothesis 1). Furthermore, we expect the experimental group using video-enhanced rubrics to achieve more performance growth than the experimental group using text-based rubrics (Hypothesis 2).

After this introduction, the second section describes previous research on the design guidelines that were applied to develop the analytic rubrics in the present study. The actual design, development and validation of these rubrics is described in the “Development of analytic rubrics tool” section. The “Method” section describes the experimental method of this study, and the “Results” section reports its results. Finally, in the concluding “Conclusions and discussion” section, the main findings and limitations of the study are discussed, and suggestions for future research are provided.

Previous research and design guidelines for formative assessment with analytic rubrics

Analytic rubrics are inextricably linked with assessment, either summative (for the final grading of learning products) or formative (for scaffolding learning processes). They provide textual descriptions of skills’ mastery levels, with performance indicators that describe concrete behavior for all constituent subskills at each mastery level (Allen & Tanner, 2006; Reddy, 2011; Sluijsmans et al., 2013) (see Figs. 1 and 2 in the “Development of analytic rubrics tool” section for an example). Such performance indicators specify aspects of variation in the complexity of a (sub)skill (e.g., presenting for a small, homogeneous group as compared to presenting for a large, heterogeneous group) and the related mastery levels (Van Merriënboer & Kirschner, 2013). Analytic rubrics explicate criteria and expectations, and can be used to check students’ progress, monitor learning, and diagnose learning problems, whether by teachers, by students themselves or by their peers (Rusman & Dirkx, 2017).

Figure 1: Subskills for oral presentation assessment

Figure 2: Specification of performance levels for criterion 4

Several motives for deploying analytic rubrics in education can be distinguished. A review study by Panadero and Jonsson (2013) identified the following motives: increasing transparency, reducing anxiety, aiding the feedback process, improving student self-efficacy, and supporting student self-regulation. Analytic rubrics also improve reliability among teachers when rating their students (Jonsson & Svingby, 2007). Evidence has shown that analytic rubrics can enhance student performance and learning when used for formative assessment purposes in combination with metacognitive activities, such as reflection and goal-setting, but research shows mixed results about their learning effectiveness (Panadero & Jonsson, 2013).

It remains unclear what exactly is needed to make rubric-based feedback effective (Reddy & Andrade, 2010; Reitmeier & Vrchota, 2009). Apparently, transparency of assessment criteria and learning goals (i.e., making expectations and criteria explicit) is not enough to establish effectiveness (Wöllenschläger et al., 2016). Several researchers have stressed the importance of how, and which, feedback to provide with rubrics (Bower et al., 2011; De Grez et al., 2009b; Kerby & Romine, 2009). We continue this section by reviewing the design guidelines for analytic rubrics we encountered in the literature, and then specifically address what the literature says about the added value of video-enhanced rubrics.

Design guidelines for analytic rubrics

Effective formative assessment methods for oral presentation and analytic rubrics should be based on proven instructional design guidelines (Van Ginkel et al., 2015). Table 1 presents an overview of the seventeen guidelines on analytic rubrics we encountered in the literature. Guidelines 1–4 inform us how to use rubrics for formative assessment; Guidelines 5–17 inform us how to use rubrics for instruction, with Guidelines 5–9 at a rather generic, meso level and Guidelines 10–17 at a more specific, micro level. We will now briefly describe them in relation to oral presentation skills.

Guideline 1: use analytic rubrics instead of rating scale rubrics if rubrics are meant for learning

Conventional rating-scale rubrics are easy to generate and use, as they contain a score for each performance criterion (e.g., on a 5-point Likert scale). However, since the performance levels are not clearly described or operationalized, ratings can suffer from rater subjectivity, and rating scales do not provide students with unambiguous feedback (Suskie, 2009). Analytic rubrics can address these shortcomings, as they contain brief textual performance descriptions for all subskills, criteria, and performance levels of complex skills like presentation, but they are harder to develop and score (Bargainnier, 2004; Brookhart, 2004; Schreiber et al., 2012).
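To make the contrast concrete, the sketch below models the two rubric formats as simple Python data structures. This is purely illustrative: the class and field names and the example descriptors are our own, not taken from the Viewbrics rubrics.

```python
from dataclasses import dataclass, field
from typing import Dict

# A rating-scale rubric only names the criterion and the scale; raters assign a bare number.
@dataclass
class RatingScaleCriterion:
    name: str                 # e.g. "Use of voice"
    scale_max: int = 5        # e.g. a 5-point Likert scale

# An analytic rubric adds a short behavioural descriptor for every performance level,
# so raters and students can see what each score stands for.
@dataclass
class AnalyticCriterion:
    name: str
    level_descriptors: Dict[int, str] = field(default_factory=dict)

voice_analytic = AnalyticCriterion(
    name="Use of voice",
    level_descriptors={
        1: "Speaks monotonously and is often hard to hear.",                # novice
        2: "Audible, but with little variation in pace or volume.",
        3: "Mostly clear, with some deliberate variation in pace and volume.",
        4: "Varies pace, volume and intonation to support the message.",    # expert
    },
)
print(voice_analytic.level_descriptors[4])
```

The extra level descriptors are exactly what makes analytic rubrics more informative for students, and also what makes them more laborious to develop and score.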

Guideline 2: use self-assessment via rubrics for formative purposes

Analytic rubrics can encourage self-assessment and self-reflection (Falchikov & Boud, 1989; Reitmeier & Vrchota, 2009), which appears essential when practicing presentations and when reflecting on other presentations (Van Ginkel et al., 2017). The usefulness of self-assessment for oral presentation was demonstrated in Ritchie’s study (2016), but was absent in a study by De Grez et al. (2009b) that used the same rubric.

Guideline 3: use peer-assessment via rubrics for formative purposes

Peer feedback is more (readily) available than teacher feedback, and can be beneficial for students’ confidence and learning (Cho & Cho, 2011; Murillo-Zamorano & Montanero, 2018), including for oral presentation (Topping, 2009). Students value peer-assessment positively if the circumstances guarantee serious feedback (De Grez et al., 2010; Lim et al., 2013). It can be assumed that using analytic rubrics positively influences the quality of peer-assessment.

Guideline 4: provide rubrics for usage by self, peers, and teachers as students appreciate rubrics

Students appreciate analytic rubrics because they support them in their learning and planning, help them produce higher-quality work, focus their efforts, and reduce anxiety about assignments (Reddy & Andrade, 2010), all aspects of importance for oral presentation. While students positively perceive the use of peer grading, the inclusion of teacher grades is still needed (Mulder et al., 2014) and most valued by students (Ritchie, 2016).

Guidelines 5–9

Heitink et al. (2016) carried out a review study identifying five relevant prerequisites for effective classroom instruction at the meso level when using analytic rubrics (for oral presentations): train teachers and students in using these rubrics, decide on a policy for their use in instruction while taking school and classroom contexts into account, and follow a constructivist learning approach. The next section describes how these guidelines were applied to the design of this study’s classroom instruction.

Guidelines 10–17

The review study by Van Ginkel et al. (2015) presents a comprehensive overview of effective factors for oral presentation instruction in higher education at the micro level. Although our research context is secondary education, the findings from that study seem very applicable, as they are rooted in firmly researched and well-documented instructional design approaches. Their guidelines pertain to (a) instruction, (b) learning, and (c) assessment in the learning environment (Biggs, 2003). The next section describes how these guidelines were applied to the design of this study’s online Viewbrics tool.

Video-enhanced rubrics

Early analytic rubrics for oral presentations were all text-based descriptions. This study assumes that such analytic rubrics may fall short when used for learning to give oral presentations, since much of the required performance refers to motoric activities and to time-consecutive operations and processes that can hardly be captured in text (e.g., body posture or use of voice during a presentation). Text-based rubrics also have a limited capacity to convey contextualized and more ‘tacit’ behavioral aspects (O’Donovan et al., 2004), since ‘tacit knowledge’ (or ‘knowing how’) is interwoven with practical activities, operations, and behavior in the physical world (Westera, 2011). Finally, text leaves more space for personal interpretation (of performance indicators) than video, which negatively influences mental model formation and feedback consistency (Lew et al., 2010).

We can therefore expect video-enhanced rubrics to overcome such restrictions, as they integrate modelling examples with analytic text-based explanations. The video-modelling examples and their embedded question prompts can illustrate behavior associated with performance levels in context, and contain information in different modalities (moving images, sound). Video-enhanced rubrics foster learning from the active observation of video-modelling examples (De Grez et al., 2014; Rohbanfard & Proteau, 2013), especially when combined with textual performance indicators. Looking at the effects of video-modelling examples, Van Gog et al. (2014) found increased task performance when the video-modelling example of an expert was also shown. De Grez et al. (2014) found comparable results for learning to give oral presentations. Teachers in training who assessed their own performance with video-modelling examples appeared to overrate their performance less than those without examples (Baecher et al., 2013). Research on mastering complex skills indicates that both modelling examples (in a variety of application contexts) and frequent feedback positively influence the learning process and skills acquisition (Van Merriënboer & Kirschner, 2013). Video-modelling examples not only capture the ‘know-how’ (procedural knowledge), but also elicit the ‘know-why’ (strategic/decisive knowledge).

Development of analytic rubrics tool

This section describes how design guidelines from previous research were applied in the actual development of the rubrics in the Viewbrics tool for our study, and then presents the subskills and performance levels that were defined for oral presentation skills.

Application of design guidelines

The previous section already mentioned that analytic rubrics should be restricted to formative assessment (Guidelines 2 and 3), and that there are good reasons to assume that a combination of teacher-, peer-, and self-assessment will improve oral presentations (Guidelines 1 and 4). Teachers and students were trained in rubric usage (Guidelines 5 and 7), and students were motivated to use the rubrics (Guideline 7). As the participating schools were already using analytic rubrics, a positive initial attitude may be assumed. Although the policy towards using analytic rubrics might not have been generally known on the work floor, the participating teachers in our study were knowledgeable (Guideline 6). We carefully considered the school context, as (a representative set of) secondary schools in the Netherlands were part of the Viewbrics team (Guideline 8). The formative assessment method was embedded within project-based education (Guideline 9).

Within this study, and at the micro level of design, the learning objectives for the first presentation were clearly specified by lower performance levels, whereas advice for students' second presentation focused on improving specific subskills that had been performed with insufficient quality during the first presentation (Guideline 10). Students carried out two consecutive projects of increasing complexity (Project 1, Project 2) with authentic tasks, among which the oral presentations (Guideline 11). Students were provided with opportunities to observe peer models to increase their self-efficacy beliefs and oral presentation competence; in our study, only students who received video-enhanced rubrics could observe videos with peer models before their first presentation (Guideline 12). Students were allowed enough opportunities to rehearse their oral presentations, to increase their presentation competence and to decrease their communication apprehension; within our study, feedback could be provided on only two oral presentations, but students could rehearse as often as they wanted outside the classroom (Guideline 13).

We ensured that the feedback in the rubrics was of high quality, i.e., explicit, contextual, adequately timed, and of suitable intensity for improving students’ oral presentation competence. Both experimental groups used digital analytic rubrics within the Viewbrics tool (with teacher-, peer-, and self-feedback), whereas the control group received feedback via a more conventional rubric (a rating scale) and could therefore not use the formative assessment and reflection functions (Guideline 14). The setup of the study implied that peers played a major role during formative assessment in both experimental groups, because they formatively assessed each oral presentation using the Viewbrics tool, whereas the control group received feedback only from their teacher (Guideline 15). Both experimental groups used the Viewbrics tool to facilitate self-assessment; the control group did not receive analytic progress data to inform their self-assessment (Guideline 16). Specific goal-setting within self-assessment has been shown to positively stimulate oral presentation performance, improve self-efficacy and reduce presentation anxiety (De Grez et al., 2009a; Luchetti et al., 2003), so the Viewbrics tool was developed to support both specific goal-setting and self-reflection (Guideline 17).

Subskills and levels for oral presentation

Reddy and Andrade (2010) stress that rubrics should be tailored to the specific learning objectives and target groups. Oral presentations in secondary education (our study context) involve generating and delivering informative messages with attention to vocal variety, articulation, and non-verbal signals. In this context, message composition and message delivery are considered important (Quianthy, 1990). Strong arguments (‘logos’) have to be presented in a credible (‘ethos’) and exciting (‘pathos’) way (Baccarini & Bonfanti, 2015). Public speaking experts agree that there is not one right way to deliver an oral presentation (Schneider et al., 2017), but they do agree that all presenters need much practice, commitment, and creativity. Effective presenters do not rigorously and obsessively apply communication rules and techniques, as their audience may then perceive the performance as too technical or artificial. Nevertheless, all presentations should demonstrate sufficient mastery of elementary (sub)skills in an integrated manner. Therefore, such skills should also be practiced as a whole (including knowledge and attitudes), making the attainment of a skill performance level more than the sum of its constituent (sub)skills (Van Merriënboer & Kirschner, 2013). A validated instrument for assessing oral presentation performance is needed to help teachers assess and support students while practicing.

When we started developing rubrics with the Viewbrics tool (late 2016), there were no studies or validated measuring instruments for oral presentation performance in secondary education, although several schools used locally developed, non-validated assessment forms (i.e., conventional rubrics). In higher education, however, Schreiber et al. (2012) had developed an analytic rubric for public speaking skills assessment, aimed at faculty members and students across disciplines. They identified eleven (sub)skills of public speaking, which could be subsumed under three factors (‘topic adaptation’, ‘speech presentation’ and ‘nonverbal delivery’, similar to logos-ethos-pathos).

Such previous work holds much value, but still had to be adapted and elaborated in the context of the current study. This study elaborated and evaluated eleven subskills that can be identified within the natural flow of an oral presentation and its distinctive features (see Fig. 1 for an overview of subskills, and Fig. 2 for a specification of performance levels for a specific subskill).

The names of the subskills as they appear in the dashboard of the Viewbrics tool are given between brackets (Fig. 3).

Figure 3: Visualization of oral presentation progress and feedback in the Viewbrics tool

The upper part of Fig. 2 shows the scoring levels for first-year secondary school students for criterion 4 of the oral presentation assessment (four values, from more expert (4 points) to more novice (1 point), from right to left), an example of a conventional rating-scale rubric. The lower part shows the corresponding screenshot from the Viewbrics tool, representing a text-based analytic rubric example. A video-enhanced analytic rubric example for this subskill provides a peer modelling the required behavior at expert level, with question prompts on selecting reliable and interesting materials. Performance levels were inspired by previous research (Ritchie, 2016; Schneider et al., 2017; Schreiber et al., 2012), but were also based upon current secondary school practices in the Netherlands, and were developed and tested with secondary school teachers and their students.

All eleven subskills are scored on the same four-point Likert scale and carry equal weight in determining the total average score. Two pilot studies tested the usability, validity and reliability of the assessment tool (Rusman et al., 2019). Based on this input, the final rubrics were improved, embedded in a prototype of the online Viewbrics tool, and used for this study. The formative assessment method consisted of six steps: (1) study the rubric; (2) practice and conduct an oral presentation; (3) conduct a self-assessment; (4) consult feedback from teacher and peers; (5) reflect on the feedback; and (6) select personal learning goal(s) for the next oral presentation.
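As a minimal sketch of the scoring rule just described (eleven subskills, each rated from 1 to 4 with equal weight, so a maximum of 44 points), the snippet below aggregates one set of subskill ratings; the placeholder subskill names are ours, not the labels from Figure 1.

```python
# Aggregate eleven subskill ratings (1 = novice ... 4 = expert) into a total score.
# With equal weights the maximum total is 11 * 4 = 44 points.
def total_presentation_score(ratings: dict) -> tuple:
    if len(ratings) != 11:
        raise ValueError("Expected ratings for exactly eleven subskills")
    if any(not 1 <= r <= 4 for r in ratings.values()):
        raise ValueError("Each subskill rating must lie between 1 and 4")
    total = sum(ratings.values())
    return total, total / len(ratings)  # total (max 44) and average per subskill

example_ratings = {f"subskill_{i}": 3 for i in range(1, 12)}  # placeholder names
print(total_presentation_score(example_ratings))              # -> (33, 3.0)
```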

After the second project (Project 2), which used the same setup and assessment method as the first project, students in the experimental groups could also see their visualized progress in the ‘dashboard’ of the Viewbrics tool (see Fig. 3, with English translations provided between brackets) by comparing performance on their two project presentations during the second reflection assignment. The dashboard shows progress (inner circles), with green indicating improved subskills, blue constant subskills, and red declining subskills. Feedback is provided by emoticons with text. Students’ personal learning goals after reflection are shown under ‘Mijn leerdoelen’ [My learning goals].
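The dashboard colouring described above can be read as a simple rule on the change in each subskill score between the two presentations. The function below is our own reconstruction of that rule for illustration, not code from the Viewbrics tool.

```python
# Map the change in a subskill score (presentation 1 -> presentation 2) to a dashboard colour:
# green = improved subskill, blue = constant subskill, red = declining subskill.
def progress_colour(score_first: int, score_second: int) -> str:
    delta = score_second - score_first
    if delta > 0:
        return "green"
    if delta == 0:
        return "blue"
    return "red"

print(progress_colour(2, 4))  # green (improved)
print(progress_colour(3, 3))  # blue  (constant)
print(progress_colour(4, 2))  # red   (declined)
```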

The previous sections described how design guidelines for analytic rubrics from the literature (“Previous research and design guidelines for formative assessment with analytic rubrics” section) were applied in a formative assessment method with analytic rubrics (“Development of analytic rubrics tool” section). The next section describes this study’s research design for comparing rubric formats.

Research design of the study

All classroom scenarios followed the same lesson plan and structure for project-based instruction, and consisted of two projects with specific rubric feedback provided in between. Both experimental groups used the same formative assessment method with validated analytic rubrics, but differed in analytic rubric format (text-based versus video-enhanced). The students in the control group did not use such a formative assessment method and only received teacher feedback on these presentations, via a conventional rating-scale rubric consisting of a standard form with attention points for presentations, without further instructions. All three scenarios required similar time investments from students. Six school classes were randomly assigned to the three conditions, so all students from the same class were in the same condition. Figure 4 gives a graphical overview of the research design of the study.

Figure 4: Research design overview

A mixed ANOVA with repeated measures on oral presentation performance (growth) was carried out to analyze the data, with rubric format (three conditions) as the between-groups factor and measurement moment (two moments) as the within-groups factor. All statistical analyses were conducted with SPSS version 24.
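The authors ran this analysis in SPSS. As a rough Python equivalent, the sketch below sets up the same kind of mixed ANOVA with the pingouin package, assuming a long-format table with one row per student per measurement moment; the file name and column names are placeholders, not part of the original study.

```python
import pandas as pd
import pingouin as pg

# Long-format data (placeholder file): one row per student per measurement moment,
# with columns for student id, condition, moment, and the performance score (max 44).
df = pd.read_csv("presentation_scores_long.csv")

# Mixed ANOVA: rubric format as between-groups factor, measurement moment as within-groups factor.
aov = pg.mixed_anova(
    data=df,
    dv="performance",
    within="moment",        # presentation 1 vs presentation 2
    between="condition",    # video-enhanced, text-based, control
    subject="student_id",
)
print(aov)

# Bonferroni-corrected pairwise comparisons, comparable to the post-hoc analysis reported later.
# (The function is called pairwise_ttests in older pingouin versions.)
posthoc = pg.pairwise_tests(
    data=df,
    dv="performance",
    within="moment",
    between="condition",
    subject="student_id",
    padjust="bonf",
)
print(posthoc)
```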

Participants

Participants were first-year secondary school students (all aged 12–13 years) from two Dutch schools, with participants equally distributed over schools and conditions (n = 166; 79 girls and 87 boys). Classes were randomly allocated to conditions. Most participants completed both oral presentations (n = 158, an overall response rate of 95%). Data were collected (almost) equally from the video-enhanced rubrics condition (n = 51), the text-based condition (n = 57), and the conventional rubrics (control) condition (n = 50).

A related study within the same context and with the same participants (Ackermans et al., 2019b) analyzed the concept maps elicited from participants and revealed that their mental models for oral presentation (indicating mastery levels) were similar across conditions. From that finding we can conclude that students possessed similar mental models for presentation skills before starting the projects. Results from the online questionnaire (“Anxiety, preparedness, and motivation” section) reveal that students in the experimental groups did not differ in anxiety, preparedness and motivation before their first presentation. Together with the teachers’ assessments of the similarity of classes, we can assume similarity of students across conditions at the start of the experiment.

Materials and procedure

Teachers from both schools worked closely together to guarantee similar instruction and difficulty levels for both projects (Project 1, Project 2). The schools agreed to follow a standardized lesson plan for both projects and their oral presentation tasks. Core team members then developed (condition-specific) materials for teacher and student workshops on how to use rubrics and how to provide instructions and feedback (Guidelines 5 and 7). This also assured that similar measures were taken for potential problems with anxiety, preparedness and motivation. Teachers received information about the (condition-specific) versions of the Viewbrics tool (see “Development of analytic rubrics tool” section). The core team consisted of three researchers and three (project) teachers, with one teacher also supervising the others. The teacher workshops were given by the supervising teacher and two researchers before student recruitment started.

Teachers estimated the similarity of all six classes with respect to students’ prior presentation skills before the first project started. All classes were informed by an introduction letter from the core team and their teachers. Participation in this study was voluntary. Students and their parents/caretakers were informed about four weeks before the start of the first project, and received information on research-specific activities, time investment and schedule. Parents/caretakers signed an informed consent form on behalf of their underage children before the study started. All were informed that data would be anonymized for scientific purposes, and that students could withdraw at any time without giving reasons.

School classes were randomly assigned to conditions. Students in the experimental groups were informed that the usability of the Viewbrics tool for oral presentation skills acquisition was being investigated, but were left unaware of the different rubric formats. Students in the control group were informed that their oral presentation skills acquisition was being investigated. From all students, concept maps about oral presentation were elicited (reflecting their mental model and mastery level). Students participated in workshops (specific to their condition and provided by their teacher) on how to use rubrics and provide peer feedback (all materials remained available throughout the study).

Before giving their presentations for Project 1, students filled in the online questionnaire via LimeSurvey. Peers and teachers in the experimental groups provided immediate feedback on the presentations, and students immediately had to self-assess their own presentations (step 3 of the assessment method). Subsequently, students could view the feedback and ratings given by their teacher and peers through the tool (step 4), were asked to reflect on this feedback (step 5), and to choose specific goals for their second oral presentation (step 6). In the control group, students received their teacher's feedback verbally, directly after completing their presentation, but did not receive any reflection assignment; they used a standard textual form with attention points (a conventional rating-scale rubric). After giving their presentations for the second project, students in the experimental groups got access to the dashboard of the Viewbrics tool (see “Development of analytic rubrics tool” section) to see their progress on subskills. About a week after the classes had ended, semi-structured interviews were carried out by one of the researchers. Finally, one of the researchers functioned as a hotline for teachers in case of urgent questions during the study, and randomly observed some of the lessons.

Measures and instruments

Oral performance scores on presentations were measured by both teachers and peers. A short online questionnaire (with 6 items) was administered to students just before their first oral presentation at the end of Project 1 (see Fig. 4). Interviews were conducted with both teachers and students at the end of the intervention to collect more qualitative data on subjective perceptions.

Oral presentation performance

Students’ progress in oral presentation performance was measured by comparing the performance scores on the two oral presentations (with three months in between). Both presentations were scored by teachers using the video-enhanced rubric in all groups (half of the score in the experimental groups, the full score in the control group). For participants in both experimental groups, oral presentation performance was also scored by peers and by the students themselves, using the rubric version specific to their condition (either video-enhanced or text-based), which made up the other half of the score. For each of the eleven subskills, between 1 point (novice level) and 4 points (expert level) could be earned, giving a maximum total performance score of 44 points. For participants in the control group the same scale applied, but no scores were given by peers or self. The inter-rater reliability of assessments between teachers and peers was Cohen’s kappa = 0.74, which is acceptable.
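To illustrate how an inter-rater agreement figure of this kind can be computed, the sketch below derives Cohen's kappa from paired teacher and peer ratings with scikit-learn; the rating vectors are invented for illustration, and the paper does not state whether an unweighted or weighted kappa was used.

```python
from sklearn.metrics import cohen_kappa_score

# Paired ratings (1-4) given by a teacher and a peer to the same subskill performances
# (invented values, for illustration only).
teacher_ratings = [3, 4, 2, 3, 3, 4, 2, 1, 3, 4, 3]
peer_ratings    = [3, 4, 2, 2, 3, 4, 2, 1, 3, 3, 3]

kappa = cohen_kappa_score(teacher_ratings, peer_ratings)
print(f"Cohen's kappa = {kappa:.2f}")

# For ordinal rubric levels a weighted variant is sometimes preferred:
weighted = cohen_kappa_score(teacher_ratings, peer_ratings, weights="quadratic")
print(f"Quadratically weighted kappa = {weighted:.2f}")
```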

Anxiety, preparedness, and motivation

Just before presenting, students answered the short questionnaire with five-point Likert scores (from 0 = totally disagree to 4 = totally agree) as an additional control for potential differences in anxiety, preparedness and motivation, since especially these factors might influence oral presentation performance (Reddy & Andrade, 2010). Notwithstanding this, teachers were the major source for controlling the similarity of conditions with respect to presentation anxiety, preparedness and motivation. The two items for anxiety were: “I find it exciting to give a presentation” and “I find it difficult to give a presentation”, a subscale with satisfactory internal reliability (Cronbach’s alpha = 0.90). The three items for preparedness were: “I am well prepared to give my presentation”, “I have often rehearsed my presentation”, and “I think I’ve rehearsed my presentation enough”, a subscale with a satisfactory Cronbach’s alpha of 0.75. The item for motivation was: “I am motivated to give my presentation”. Unfortunately, the online questionnaire was not administered within the control group, due to unforeseen circumstances.
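As a sketch of how such a subscale reliability could be reproduced, the snippet below computes Cronbach's alpha for the two anxiety items with pingouin; the answers are invented, and the item column names are our own shorthand.

```python
import pandas as pd
import pingouin as pg

# One row per student, one column per anxiety item (0 = totally disagree ... 4 = totally agree).
anxiety_items = pd.DataFrame({
    "exciting_to_present":  [4, 3, 2, 4, 1, 3, 2, 4],  # invented answers
    "difficult_to_present": [4, 3, 2, 3, 1, 3, 2, 4],
})

alpha, ci = pg.cronbach_alpha(data=anxiety_items)
print(f"Cronbach's alpha = {alpha:.2f} (95% CI: {ci})")
```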

Interviews

Semi-structured interviews with teachers (six) and students (thirty) were meant to gather qualitative data on the practical usability and usefulness of the Viewbrics tool. Examples of questions are: “Have you encountered any difficulties in using the Viewbrics online tool? If any, could you please mention which one(s)?” (students of the experimental groups and teachers); “Did the feedback help you to improve your presentation skills? If not, what feedback do you need to improve your presentation skills?” (students only); “How do you evaluate the usefulness of formative assessment?” (students and teachers); “Would you like to organize things differently when applying formative assessment than during this study? If so, what would you organize differently?” (teachers only); “How much time did you spend on providing feedback? Did you need more or less time than before?” (teachers only).

Interviews with teachers and students revealed that the reported rubrics approach was easy to use and useful within the formative assessment method. Project teachers could easily stick to the lesson plans as agreed upon in advance. However, project teachers regarded the classroom scenarios as relatively time-consuming, and expected that for some other schools it might be challenging to follow the Viewbrics approach. None of the project teachers had to consult the hotline during the study, and no deviations from the lesson plans were observed by the researchers.

Results

The most important results on the performance measures and the questionnaire are presented below and compared between conditions.

A mixed ANOVA, with oral presentation performance as the within-subjects factor (two measurements) and rubric format as the between-subjects factor (three conditions), revealed an overall significant improvement of oral presentation performance over time, with F(1, 157) = 58.13, p < 0.01, ηp² = 0.27. Significant differences over time were also found between conditions, with F(2, 156) = 17.38, p < 0.01, ηp² = 0.18. Tests of between-subjects effects showed significant differences between conditions, with F(2, 156) = 118.97, p < 0.01, ηp² = 0.59, with both experimental groups outperforming the control group as expected (so we could accept H1). However, only control group students showed significant progress on performance scores over time (at the 0.01 level). Contrary to expectations, no significant differences between the experimental groups were found at either measurement (so we had to reject H2). For descriptives of group averages (over time), see Table 2.

A post-hoc analysis, using multiple pairwise comparisons with Bonferroni correction, confirmed that the experimental groups significantly (p < 0.01) outperformed the control group at both moments in time, and that the two experimental groups did not differ significantly at either measurement. Regarding performance progress over time, only the control group showed significant growth (again with p < 0.01). The difference between the experimental groups in favour of video-enhanced rubrics approached significance (p = 0.053), but formally H2 had to be rejected. This finding, however, is a promising trend to be explored further with larger numbers of participants.

Independent t-tests comparing participants in the two experimental groups on anxiety, preparedness, and motivation before their first presentation showed no differences, with t(1,98) = 1.32 and p = 0.19 for anxiety, t(1,98) = −0.14 and p = 0.89 for preparedness, and t(1,98) = −1.24 and p = 0.22 for motivation (see Table 3 for group averages).
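A minimal sketch of such a comparison in Python uses SciPy's independent-samples t-test on the subscale scores of the two experimental groups; the score arrays below are invented placeholders, not the study data.

```python
from scipy import stats

# Mean anxiety subscale scores per student in the two experimental groups (invented values).
anxiety_video_enhanced = [2.5, 3.0, 1.5, 2.0, 3.5, 2.5, 1.0, 2.0]
anxiety_text_based     = [2.0, 2.5, 1.5, 2.5, 3.0, 2.0, 1.5, 2.5]

t_stat, p_value = stats.ttest_ind(anxiety_video_enhanced, anxiety_text_based)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # no significant difference expected here
```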

As mentioned in the previous section (interviews with teachers), teachers assessed that presentation anxiety, preparedness and motivation in the control group did not differ from the experimental groups. It can therefore be assumed that all groups were similar regarding presentation anxiety, preparedness and motivation before presenting, and that these factors did not confound the oral presentation results. Questionnaire data are missing for 58 respondents: one in the video-enhanced condition, seven in the text-based condition, and all fifty in the control group (to which the questionnaire was not administered).

Conclusions and discussion

The first purpose was to study whether applying evidence-informed design guidelines in the development of formative assessment with analytic rubrics supports the oral presentation performance of first-year secondary school students in the Netherlands. Students who used such validated rubrics indeed outperformed students using conventional rubrics (so H1 could be accepted). This study has demonstrated that the design guidelines can also be effectively applied in secondary education, which makes them more generic. The second purpose was to study whether video-enhanced rubrics would be more beneficial to oral presentation skills acquisition than text-based rubrics, but we did not find significant differences here (so H2 had to be rejected). However, post-hoc analysis shows that the growth in performance scores over time does seem higher when using video-enhanced rubrics, a promising difference that is only marginally significant. Preliminary qualitative findings from the interviews indicate that the Viewbrics tool can be easily integrated into classroom instruction and appears usable for the target audiences (both teachers and students), although teachers state that it is rather time-consuming to conform to all guidelines.

All students had prior experience with oral presentations (from primary school) and relatively high oral presentation scores at the start of the study, so there was limited room for improvement between their first and second oral presentation. Participants in the control group scored relatively low on their first presentation, and therefore had more room for improvement during the study. In addition, the somewhat more difficult content of the second project (Guideline 11) might have slightly reduced the quality of the second oral presentation. Also, more intensive training, additional presentations and their assessments might have demonstrated more added value of the analytic rubrics. Learning might still have occurred without being visible in performance, since adequate mental models of skills are not automatically applied during performance (Ackermans et al., 2019b).

A first limitation (and strength at the same time) of this study was its contextualization within a specific subject domain and educational sector over a longer period of time, which implies we cannot completely exclude some influence of confounding factors. A second limitation is that the Viewbrics tool has been specifically designed for formative assessment and is not meant for summative assessment purposes. Although our study revealed the inter-rater reliability of our rubrics to be satisfactory (see “Measures and instruments” section), it is likely to be lower, and the tool less suitable, when compared to more traditional summative assessment methods (Jonsson & Svingby, 2007). Thirdly, having a reliable rubric is no evidence of content validity (representativeness, fidelity of the scoring structure to the construct domain) or of generalizability to other domains and educational sectors (Jonsson & Svingby, 2007). Fourth, one might criticize the practice-based research design of our study, as it is less controlled than laboratory studies. We acknowledge that applying more unobtrusive and objective measures to better understand the complex relationships between instructional characteristics, student characteristics, and cognitive learning processes and strategies would best be achieved by combining laboratory research with practice-based research. Notwithstanding these issues, we deliberately chose design-based research and evidence-informed findings from educational practice.

Future research could examine the Viewbrics approach to formative assessment of oral presentation skills in different contexts (other subject matters and educational sectors). The Viewbrics tool could be extended with functions for self-assessment (e.g., recording and replaying one's own presentations), for coping with speech anxiety (Leary & Kowalski, 1995), and for goal-setting (De Grez et al., 2009a). As this is a first study on video-enhanced rubrics, more fine-grained and fundamental research into their beneficial effects on cognitive processes is needed, also to justify the additional development costs: developing video-enhanced rubrics is more costly than developing text-based rubrics. Another line of research might be directed at developing multiple measures for objectively determining oral presentation competence, for example using sensor-based data gathering and algorithms for guidance and meaningful interpretation (Schneider et al., 2017), or direct measures of cortisol levels for speaking anxiety (Bartholomay & Houlihan, 2016; Merz & Wolf, 2015). Other instructional strategies might also be considered; for example, repeated practice of the same oral presentation might result in performance improvement, as suggested by Ritchie (2016). This would also reduce the emphasis on presentation content and allow more focus on presentation delivery. Finding good instructional technologies to support complex oral presentation skills will remain important throughout the twenty-first century and beyond.

Ackermans, K., Rusman, E., Brand-Gruwel, S., & Specht, M. (2019a). Solving instructional design dilemmas to develop Video-Enhanced Rubrics with modeling examples to support mental model development of complex skills: The Viewbrics-project use case. Educational Technology Research & Development, 67 (4), 993–1002.


Ackermans, K., Rusman, E., Nadolski, R. J., Brand-Gruwel, S., & Specht, M. (2019b). Video-or text-based rubrics: What is most effective for mental model growth of complex skills within formative assessment in secondary schools? Computers in Human Behavior, 101 , 248–258.

Allen, D., & Tanner, K. (2006). Rubrics: Tools for making learning goals and evaluation criteria explicit for both teachers and learners. CBE Life Sciences Education, 5 (3), 197–203.

Baccarini, C., & Bonfanti, A. (2015). Effective public speaking: A conceptual framework in the corporate-communication field. Corporate Communications, 20 (3), 375–390.

Baecher, L., Kung, S. C., Jewkes, A. M., & Rosalia, C. (2013). The role of video for self-evaluation in early field experiences. Teaching and Teacher Education, 36, 189–197.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory . Prentice-Hall.

Bargainnier, S. (2004). Fundamentals of rubrics. In D. Apple (Ed.), Faculty guidebook (pp. 75–78). Pacific Crest.

Bartholomay, E. M., & Houlihan, D. D. (2016). Public Speaking Anxiety Scale: Preliminary psychometric data and scale validation. Personality and Individual Differences, 94, 211–215.

Biggs, J. (2003). Teaching for quality learning at University . Society for Research in Higher Education and Open University Press.

Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38 (6), 698–712.

Bower, M., Cavanagh, M., Moloney, R., & Dao, M. (2011). Developing communication competence using an online Video Reflection system: Pre-service teachers’ experiences. Asia-Pacific Journal of Teacher Education, 39 (4), 311–326.

Brookhart, S. M. (2004). Assessment theory for college classrooms. New Directions for Teaching and Learning, 100 , 5–14.

Brookhart, S. M., & Chen, F. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67 (3), 343–368.

Cho, Y. H., & Cho, K. (2011). Peer reviewers learn from giving comments. Instructional Science, 39 (5), 629–643.

De Grez, L., Valcke, M., & Berings, M. (2010). Peer assessment of oral presentation skills. Procedia Social and Behavioral Sciences, 2 (2), 1776–1780.

De Grez, L., Valcke, M., & Roozen, I. (2009a). The impact of goal orientation, self-reflection and personal characteristics on the acquisition of oral presentation skills. European Journal of Psychology of Education, 24 (3), 293–306.

De Grez, L., Valcke, M., & Roozen, I. (2009b). The impact of an innovative instructional intervention on the acquisition of oral presentation skills in higher education. Computers & Education, 53 (1), 112–120.

De Grez, L., Valcke, M., & Roozen, I. (2014). The differential impact of observational learning and practice-based learning on the development of oral presentation skills in higher education. Higher Education Research & Development, 33(2), 256–271.

Falchikov, N., & Boud, D. (1989). Student self-assessment in higher education: A meta-analysis. Review of Educational Research, 59 (4), 395–430.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81–112.

Heitink, M. C., Van der Kleij, F. M., Veldkamp, B. P., & Schildkamp, K. (2016). A systematic review of prerequisites for implementing assessment for learning in classroom practice. Educational Research Review, 17 , 50–62.

Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2 (2), 130–144.

Kerby, D., & Romine, J. (2009). Develop oral presentation skills through accounting curriculum design and course-embedded assessment. Journal of Education for Business, 85 (3), 172–179.

Leary, M. R., & Kowalski, R. M. (1995). Social anxiety . Guilford Press.

Lew, M. D. N., Alwis, W. A. M., & Schmidt, H. G. (2010). Accuracy of students’ self-assessment and their beliefs about its utility. Assessment and Evaluation in Higher Education, 35 (2), 135–156.

Lim, B. T., Moriarty, H., Huthwaite, M., Gray, L., Pullon, S., & Gallagher, P. (2013). How well do medical students rate and communicate clinical empathy? Medical Teacher, 35 , 946–951.

Luchetti, A. E., Phipps, G. L., & Behnke, R. R. (2003). Trait anticipatory public speaking anxiety as a function of self-efficacy expectations and self-handicapping strategies. Communication Research Reports, 20(4), 348–356.

Merz, C. J., & Wolf, O. T. (2015). Examination of cortisol and state anxiety at an academic setting with and without oral presentation. The International Journal on the Biology of Stress, 18 (1), 138–142.

Morreale, S. P., & Pearson, J. C. (2008). Why communication education is important: Centrality of discipline in the 21st century. Communication Education, 57 , 224–240.

Mulder, R. A., Pearce, J. M., & Baik, C. (2014). Peer review in higher education: Student perceptions before and after participation. Active Learning in Higher Education, 15 (2), 157–171.

Murillo-Zamorano, L. R., & Montanero, M. (2018). Oral presentations in higher education: A comparison of the impact of peer and teacher feedback. Assessment & Evaluation in Higher Education, 43 (1), 138–150.

Narciss, S. (2008). Feedback strategies for interactive learning tasks. In J. M. Spector, M. D. Merrill, J. J. G. van Merrienboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 125–144). Lawrence Erlbaum Associates.

O’Donovan, B., Price, M., & Rust, C. (2004). Know what I mean? Enhancing student understanding of assessment standards and criteria. Teaching in Higher Education, 9(3), 325–335.

Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129–144.

Quianthy, R. L. (1990). Communication is life: Essential college sophomore speaking and listening competencies . National Communication Association.

Reddy, Y. M. (2011). Design and development of rubrics to improve assessment outcomes. Quality Assurance in Education, 19 (1), 84–104.

Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35 (4), 435–448.

Reitmeier, C. A., & Vrchota, D. A. (2009). Self-assessment of oral communication presentations in food science and nutrition. Journal of Food Science Education, 8(4), 88–92.

Ritchie, S. M. (2016). Self-assessment of video-recorded presentations: Does it improve skills? Active Learning in Higher Education, 17 (3), 207–221.

Rohbanfard, H., & Proteau, L. (2013). Live versus video presentation techniques in the observational learning of motor skills. Trends in Neuroscience and Education, 2, 27–32.

Rusman, E., & Dirkx, K. (2017). Developing rubrics to assess complex (generic) skills in the classroom: How to distinguish skills’ mastery Levels? Practical Assessment, Research & Evaluation. https://doi.org/10.7275/xfp0-8228


Rusman, E., Nadolski, R. J., & Ackermans, K. (2019). Students’ and teachers’ perceptions of the usability and usefulness of the first Viewbrics-prototype: A methodology and online tool to formatively assess complex generic skills with video-enhanced rubrics in Dutch secondary education. In S. Draaijer, D. Joosten-ten Brinke, E. Ras (Eds), Technology enhanced assessment. TEA 2018. Communications in computer and information science (Vol. 1014, pp. 27–41). Springer, Cham.

Schneider, J., Börner, D., Van Rosmalen, P., & Specht, M. (2017). Presentation trainer: What experts and computers can tell about your nonverbal communication. Journal of Computer Assisted Learning, 33 (2), 164–177.

Schreiber, L. M., Paul, G. D., & Shibley, L. R. (2012). The development and test of the public speaking competence rubric. Communication Education, 61 (3), 205–233.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78 (1), 153–189.

Sluijsmans, D., Joosten-ten Brinke, D., & Van der Vleuten, C. (2013). Toetsen met leerwaarde [Assessments with value for learning] . The Hague, The Netherlands: NWO. Retrieved from https://sluijsmans.net/wp-content/uploads/2019/02/Toetsen-met-leerwaarde.pdf

Smith, C. M., & Sodano, T. M. (2011). Integrating lecture capture as a teaching strategy to improve student presentation skills through self-assessment. Active Learning in Higher Education, 12 , 151–162.

Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). Wiley.

Topping, K. (2009). Peer assessment. Theory into Practice, 48 (1), 20–27.

Van Ginkel, S., Gulikers, J., Biemans, H., & Mulder, M. (2015). Towards a set of design principles for developing oral presentation competence: A synthesis of research in higher education. Educational Research Review, 14 , 62–80.

Van Ginkel, S., Laurentzen, R., Mulder, M., Mononen, A., Kyttä, J., & Kortelainen, M. J. (2017). Assessing oral presentation performance: Designing a rubric and testing its validity with an expert group. Journal of Applied Research in Higher Education, 9 (3), 474–486.

Van Gog, T., Verveer, I., & Verveer, L. (2014). Learning from video modeling examples: Effects of seeing the human model’s face. Computers and Education, 72 , 323–327.

Van Merriënboer, J. J. G., & Kirschner, P. A. (2013). Ten steps to complex learning (2nd ed.). Lawrence Erlbaum.

Voogt, J., & Roblin, N. P. (2012). A comparative analysis of international frameworks for 21st century competences: Implications for national curriculum policies. Journal of Curriculum Studies, 44 , 299–321.

Westera, W. (2011). On the changing nature of learning context: Anticipating the virtual extensions of the world. Educational Technology and Society, 14 , 201–212.

Wöllenschläger, M., Hattie, J., Machts, N., Möller, J., & Harms, U. (2016). What makes rubrics effective in teacher-feedback? Transparency of learning goals is not enough. Contemporary Educational Psychology, 44–45 , 1–11.


Acknowledgements

The authors would like to thank the reviewers for their constructive comments on the paper, all students and teachers who participated in this study, and the management of the participating schools.

The Viewbrics-project is funded by the practice-oriented research program of the Netherlands Initiative for Education Research (NRO), part of The Netherlands Organization for Scientific Research (NWO), under Grant Number: 405-15-550.

Author information

Authors and affiliations.

Faculty of Educational Sciences, Open University of the Netherlands, Valkenburgerweg 177, 6419 AT, Heerlen, The Netherlands

Rob J. Nadolski, Hans G. K. Hummel, Ellen Rusman & Kevin Ackermans


Corresponding author

Correspondence to Rob J. Nadolski .

Ethics declarations

Conflict of interest.

The authors declare that they have no conflict of interest.

Ethical approval

This research has been approved by the ethics committee of the author's institution (U2017/05559/HVM).


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Nadolski, R.J., Hummel, H.G.K., Rusman, E. et al. Rubric formats for the formative assessment of oral presentation skills acquisition in secondary education. Education Tech Research Dev 69 , 2663–2682 (2021). https://doi.org/10.1007/s11423-021-10030-7


Accepted : 03 July 2021

Published : 20 July 2021

Issue Date : October 2021



Keywords:

  • Digital rubrics
  • Analytic rubrics
  • Oral presentation skills
  • Formative assessment method

Academic Development Centre

Oral presentations

Using oral presentations to assess learning

Introduction

Oral presentations are a form of assessment that calls on students to use the spoken word to express their knowledge and understanding of a topic. They allow you to capture not only the research that students have done but also a range of cognitive and transferable skills.

Different types of oral presentations

A common format is in-class presentations on a prepared topic, often supported by visual aids in the form of PowerPoint slides or a Prezi, with a standard length that varies between 10 and 20 minutes. In-class presentations can be performed individually or in a small group and are generally followed by a brief question and answer session.

Oral presentations are often combined with other modes of assessment; for example oral presentation of a project report, oral presentation of a poster, commentary on a practical exercise, etc.

Also common is the use of PechaKucha, a fast-paced presentation format consisting of a fixed number of slides that are set to move on every twenty seconds (Hirst, 2016). The original format used 20 slides, resulting in a presentation of 6 minutes and 40 seconds; however, you can reduce this to 10 or 15 slides to suit group size or topic complexity and coverage. One of the advantages of this format is that you can fit a large number of presentations into a short period of time, and everyone works to the same rules. It is also a format that enables students to express their creativity through the appropriate use of images on their slides to support their narrative.
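The timing arithmetic behind these variants is simply the number of slides multiplied by the fixed interval per slide; a small sketch (the slide counts are just examples):

```python
# PechaKucha timing: every slide auto-advances after a fixed interval (20 seconds in the original format).
def pecha_kucha_duration(slides: int, seconds_per_slide: int = 20) -> str:
    total = slides * seconds_per_slide
    return f"{slides} slides -> {total // 60} min {total % 60} s"

for n_slides in (20, 15, 10):
    print(pecha_kucha_duration(n_slides))
# 20 slides -> 6 min 40 s
# 15 slides -> 5 min 0 s
# 10 slides -> 3 min 20 s
```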

When deciding which format of oral presentation best allows your students to demonstrate the learning outcomes, it is also useful to consider which format closely relates to real world practice in your subject area.

What can oral presentations assess?

The key questions to consider include:

  • what will be assessed?
  • who will be assessing?

This form of assessment places the emphasis on students’ capacity to arrange and present information in a clear, coherent and effective way, rather than on their capacity to find relevant information and sources. However, as noted above, it could be used to assess both.

Oral presentations, depending on the task set, can be particularly useful in assessing:

  • knowledge skills and critical analysis
  • applied problem-solving abilities
  • ability to research and prepare persuasive arguments
  • ability to generate and synthesise ideas
  • ability to communicate effectively
  • ability to present information clearly and concisely
  • ability to present information to an audience with appropriate use of visual and technical aids
  • time management
  • interpersonal and group skills.

When using this method you are likely to aim to assess a combination of the above to the extent specified by the learning outcomes. It is also important that all aspects being assessed are reflected in the marking criteria.

In the case of group presentations you might also assess:

  • level of contribution to the group
  • ability to contribute without dominating
  • ability to maintain a clear role within the group.

See also the ‘Assessing group work’ section for further guidance.

As with all of the methods described in this resource, it is important to ensure that students are clear about what they are expected to do and understand the criteria that will be used to assess them. (See Ginkel et al., 2017 for a useful case study.)

Although the use of oral presentations is increasingly common in higher education some students might not be familiar with this form of assessment. It is important therefore to provide opportunities to discuss expectations and practice in a safe environment, for example by building short presentation activities with discussion and feedback into class time.

Individual or group

It is not uncommon to assess group presentations. If you are opting for this format:

  • will you assess outcome or process, or both?
  • how will you distribute tasks and allocate marks?
  • will group members contribute to the assessment by reporting group process?

Assessed oral presentations are often performed before a peer audience - either in-person or online. It is important to consider what role the peers will play and to ensure they are fully aware of expectations, ground rules and etiquette whether presentations take place online or on campus:

  • will the presentation be peer assessed? If so how will you ensure everyone has a deep understanding of the criteria?
  • will peers be required to interact during the presentation?
  • will peers be required to ask questions after the presentation?
  • what preparation will peers need to be able to perform their role?
  • how will the presence and behaviour of peers impact on the assessment?
  • how will you ensure equality of opportunities for students who are asked fewer/more/easier/harder questions by peers?

Hounsell and McCune (2001) note the importance of the physical setting and layout as one of the conditions which can impact on students’ performance; it is therefore advisable to offer students the opportunity to familiarise themselves with the space in which the presentations will take place and to agree layout of the space in advance.

Good practice

As a summary to the ideas above, Pickford and Brown (2006, p.65) list good practice, based on a number of case studies integrated in their text, which includes:

  • make explicit the purpose and assessment criteria
  • use the audience to contribute to the assessment process
  • record [audio / video] presentations for self-assessment and reflection (you may have to do this for QA purposes anyway)
  • keep presentations short
  • consider bringing in externals from commerce / industry (to add authenticity)
  • consider banning notes / audio visual aids (this may help if AI-generated/enhanced scripts run counter to intended learning outcomes)
  • encourage students to engage in formative practice with peers (including formative practice of giving feedback)
  • use a single presentation to assess synoptically, linking several parts/modules of the course
  • give immediate oral feedback
  • link back to the learning outcomes that the presentation is assessing; process or product.

Neumann in Havemann and Sherman (eds., 2017) provides a useful case study in chapter 19: Student Presentations at a Distance, and Grange & Enriquez in chapter 22: Moving from an Assessed Presentation during Class Time to a Video-based Assessment in a Spanish Culture Module.

Diversity & inclusion

Some students might feel more comfortable or be better able to express themselves orally than in writing, and vice versa. Others might have particular difficulties expressing themselves verbally, due for example to hearing or speech impediments, anxiety, personality, or language abilities. As with any other form of assessment it is important to be aware of elements that potentially put some students at a disadvantage and consider solutions that benefit all students.

Academic integrity

Oral presentations present relatively low risk of academic misconduct if they are presented synchronously and in class. Avoiding the use of a script can ensure that students are not simply reading out someone else’s text or an AI-generated script, whilst the questions posed at the end can allow assessors to gauge the depth of understanding of the topic and structure presented. (See the further guidance on academic integrity.)

Recorded presentations (asynchronous) may be produced with help, so additional mechanisms to ensure that the work presented is the student’s own may be beneficial, such as a reflective account or a live Q&A session. AI can create scripts, slides and presentations, copy real voices relatively convincingly, and create video avatars; these tools can enable students to create professional video content and may make this sort of assessment more accessible. The desirability of such tools will depend upon what you are aiming to assess and how you will evaluate student performance.

Student and staff experience

Oral presentations provide a useful opportunity for students to practice skills which are required in the world of work. Through the process of preparing for an oral presentation, students can develop their ability to synthesise information and present to an audience. To improve authenticity, the assessment might involve an actual audience, realistic timeframes for preparation, and collaboration between students, and it might be situated in realistic contexts, which could include the use of AI tools.

As mentioned above, it is important to remember that the stress of presenting information to a public audience might put some students at a disadvantage. Similarly, non-native speakers might perceive language as an additional barrier. AI may reduce some of these challenges, but it will be important to ensure equal access to these tools to avoid disadvantaging students. Discussing criteria and expectations with your students, providing a clear structure, and ensuring opportunities to practice and receive feedback will benefit all students.

Some disadvantages of oral presentations include:

  • anxiety - students might feel anxious about this type of assessment and this might impact on their performance
  • time - oral assessment can be time consuming both in terms of student preparation and performance
  • time - to develop skill in designing slides if they are required; we cannot assume knowledge of PowerPoint etc.
  • lack of anonymity and potential bias on the part of markers.

From a student perspective preparing for an oral presentation can be time consuming, especially if the presentation is supported by slides or a poster which also require careful design.

From a teacher’s point of view, presentations are generally assessed on the spot and feedback is immediate, which reduces marking time. It is therefore essential to have clearly defined marking criteria which help assessors to focus on the intended learning outcomes rather than simply on presentation style.

Useful resources

Joughin, G. (2010). A short guide to oral assessment . Leeds Metropolitan University/University of Wollongong http://eprints.leedsbeckett.ac.uk/2804/

Race, P. and Brown, S. (2007). The Lecturer’s Toolkit: a practical guide to teaching, learning and assessment. 2nd edition. London: Routledge.


Assessment Criteria and Rubrics

An introduction.

This guide is an introduction to:

  • Writing an assessment brief with clear assessment criteria and rubrics
  • Grading tools available in Turnitin enabling the use of criteria and rubrics in marking.

Clear and explicit assessment criteria and rubrics increase the transparency of assessment, help develop students into ‘novice assessors’ (Gipps, 1994), and facilitate deep learning.  Providing well-designed criteria and rubrics contributes to communicating assessment requirements in a way that is more inclusive for everyone (including markers), regardless of previous learning experiences or individual differences in linguistic, cultural and educational background.  It also facilitates the development of self-judgement skills (Boud & Falchikov, 2007).

  • Assessment brief
  • Assessment criteria
  • Assessment rubric
  • Guidance on how to create rubrics and grading forms
  • Guidance on how to create a rubric in Handin

Terminology Explored

The terms ‘assessment brief’, ‘assessment criteria’ and ‘assessment rubric’, however, are often used interchangeably, which can lead to misunderstanding and reduce the effectiveness of both the design and the interpretation of the assessment brief.  It is therefore important to first clarify these terms:

Assessment Brief

An assessment (assignment) brief refers to the instructions provided to communicate the requirements and expectations of assessment tasks, including the assessment criteria and rubrics, to students.  The brief should clearly outline which module learning outcomes will be assessed in the assignment.

NOTE: If you are new to writing learning outcomes, or need a refresher, have a look at Baume’s guide “Writing and using good learning outcomes” (2009); see the list of references.

When writing an assessment brief, it may be useful to consider the following questions with regards to your assessment brief:

  • Have you outlined clearly what type of assessment you require students to complete?  For example, instead of “written assessment”, outline clearly what type of written assessment you require from your students; is it a report, a reflective journal, a blog, presentation, etc.  It is also recommended to give a breakdown of the individual tasks that make up the full assessment within the brief, to ensure transparency.
  • Is the purpose of the assessment immediately clear to your students, i.e. why the student is being asked to do the task?  It might seem obvious to you as an academic, but for students new to academia and the subject discipline, it might not be clear.  For example, explain why they have to write a reflective report or a journal and indicate which module learning outcomes are to be assessed in this specific assessment task.
  • Is all the important task information clearly outlined, such as assessment deadlines, word count, criteria and further support and guidance?

Assessment Criteria

Assessment criteria communicate to students the knowledge, skills and understanding (in line with the expected module learning outcomes) that assessors expect students to evidence in any given assessment task.  To write a good set of criteria, the focus should be on the characteristics of the learning outcomes that the assignment will evidence, not only on the characteristics of the assignment task itself (presentation, written task, etc.).

Thus, the criteria outline what we expect from our students (based on learning outcomes); however, they do not in themselves make assumptions about the actual quality or level of achievement (Sadler, 1987: 194) and need to be refined in the assessment rubric.

When writing an assessment brief, it may be useful to consider the following questions with regards to the criteria that will be applied to assess the assignment:

  • Are your criteria related and aligned with the module and (or) the course learning outcomes?
  • How many criteria will you assess in any particular task?  Consider how realistic and achievable this is.
  • Are the criteria clear, and have you avoided using terms that may not be clear to students (academic jargon)?
  • Are the criteria and standards (your quality definitions) aligned with the level of the course?  For guidance, the Credit Level Descriptors (SEEC, 2016) and the QAA subject benchmarks within the Framework for Higher Education Qualifications are useful starting points.

Assessment Rubric

The assessment rubric forms part of a set of criteria and refers specifically to the “levels of performance quality on the criteria” (Brookhart & Chen, 2015, p. 343).

Generally, rubrics fall into two categories: holistic and analytic. A holistic rubric assesses an assignment as a whole and is not broken down into individual assessment criteria.  For the purpose of this guidance, the focus will be on the analytic rubric, which provides separate performance descriptions for each criterion.

An assessment rubric is therefore a tool used in the process of assessing student work that usually includes the following essential features:

  • Scoring strategy – Can be numerical or qualitative, associated with the levels of mastery (quality definitions). (Shown as SCALE in Turnitin)
  • Quality definitions (levels of mastery) – Specify the levels of achievement / performance in each criterion.

 (Dawson, 2017).

The figure below is an example of the features of a complete rubric, including the assessment criteria.

Finally, it may be useful to consider the following questions with regard to the criteria and their associated rubric:

  • Does your scoring strategy clearly define and cover the whole grading range (see the sketch after this list)?  For example, do you distinguish between the distinction band (70-79%) and work at 80% and above?
  • Do the words and terms used to indicate the level of mastery clearly enable students to distinguish between the different judgements?  For example, how do you differentiate between work that is outstanding, excellent and good?
  • Is the chosen wording in your rubric too explicit?  It should be explicit but not overly specific, to avoid students adopting a mechanistic approach to your assignment.  For example, instead of stating a minimum number of references, consider referring to the effectiveness or quality of the use of literature, or to awareness and critical analysis of the supporting literature.
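
To illustrate what covering the whole grading range can look like, the minimal sketch below maps a percentage mark to a level-of-mastery label. The band boundaries and labels are illustrative assumptions, not a prescribed scale; substitute your own institution's definitions.

```python
# Minimal sketch: mapping a percentage mark to an assumed set of mastery bands.
# Boundaries and labels are illustrative only; use your institution's scale.
BANDS = [
    (80, "Outstanding"),   # distinguishes 80%+ from the 70-79% distinction band
    (70, "Excellent"),
    (60, "Good"),
    (50, "Satisfactory (pass threshold)"),
    (0,  "Not yet meeting the learning outcomes"),
]

def mastery_band(mark: float) -> str:
    """Return the mastery label whose lower boundary the mark meets."""
    for lower_bound, label in BANDS:
        if mark >= lower_bound:
            return label
    raise ValueError("mark must be non-negative")

print(mastery_band(83))  # Outstanding
print(mastery_band(74))  # Excellent
print(mastery_band(52))  # Satisfactory (pass threshold)
```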

NOTE: For guidance across Coventry University Group on writing criteria and rubrics, follow the links to guidance.

 POST GRADUATE Assessment criteria and rubrics (mode R)

 UNDER GRADUATE Assessment criteria and rubrics (mode E)

Developing Criteria and Rubrics within Turnitin

Within Turnitin, depending on the type of assessment, you have a choice of four grading tools:

  • Qualitative Rubric – A rubric that provides feedback but has no numeric scoring.  More descriptive than measurable.  This rubric is selected by choosing the ‘0’ symbol at the base of the Rubric.
  • Standard Rubric – Used for numeric scoring.  Enter scale values for each column (rubric score) and percentages for each criterion row; the percentages must sum to 100%.  This rubric can calculate and input the overall grade (see the sketch after this list).  This rubric is selected by choosing the % symbol at the base of the Rubric window.
  • Custom Rubric – Add criteria (rows) and descriptive scales; when marking, enter (type) any value directly into each rubric cell.  This rubric will calculate and input the overall grade.  This rubric is selected by choosing the ‘Pencil’ symbol at the base of the Rubric window.
  • Grading form – Can be used with or without a numerical score.  If used without a numerical score, it provides purely descriptive feedback.  If used with numerical scoring, the scores can be added together to create an overall grade.  Note that grading forms can be used without a ‘paper assignment’ being submitted; for example, they can be used to assess work such as a video submission, a work of art, a computer program or a musical performance.
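
To make the Standard Rubric arithmetic concrete, the sketch below shows one plausible way an overall grade could be computed from criterion weights (percentages summing to 100%) and the scale value selected for each criterion. This is an assumed illustration of the calculation, not Turnitin's actual code, and the criterion names and values are hypothetical.

```python
# Minimal sketch (assumed logic, not Turnitin's implementation):
# each criterion has a weight (% of the grade) and the marker picks a
# scale value for it; the overall grade is the weighted average,
# expressed against the maximum scale value.

rubric = {  # hypothetical rubric: weights must sum to 100
    "Content accuracy":   {"weight": 40, "selected": 4},
    "Structure":          {"weight": 30, "selected": 3},
    "Use of visual aids": {"weight": 30, "selected": 5},
}
MAX_SCALE = 5  # highest value on the rating scale

def overall_grade(rubric: dict, max_scale: int) -> float:
    """Return a percentage grade from weighted criterion scores."""
    total_weight = sum(c["weight"] for c in rubric.values())
    assert total_weight == 100, "criterion weights should sum to 100%"
    # weights already sum to 100, so the weighted sum is a percentage
    return sum(c["weight"] * c["selected"] / max_scale for c in rubric.values())

print(f"Overall grade: {overall_grade(rubric, MAX_SCALE):.1f}%")
# -> Overall grade: 80.0%
```

For the worked values above, the overall grade is 40×4/5 + 30×3/5 + 30×5/5 = 80%.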

Guidance on how to Create Rubrics and Grading Forms

Guidance by Turnitin:

https://help.turnitin.com/feedback-studio/turnitin-website/instructor/rubric-scorecards-and-grading-forms/creating-a-rubric-or-grading-form-during-assignment-creation.htm

University of Kent – Creating and using rubrics and grading form (written guidance):

https://www.kent.ac.uk/elearning/files/turnitin/turnitin-rubrics.pdf

Some Examples to Explore

It is useful to explore some examples from across Higher Education; the resource developed by UCL on designing generic assessment criteria and rubrics from level 4 to 7 is a good starting point.

Guidance on how to Create a Rubric in Handin

Within Handin, depending on the type of assessment, you have a choice of three grading tools (see the list below), as well as the option of “free-form” grading, which allows you to enter anything in the grade field when grading submissions.

  • None = qualitative
  • Range = quantitative – can choose score from range
  • Fixed = quantitative – one score per level

Guide to Handin: Creating ungraded (“free-form”) assignments

https://aula.zendesk.com/hc/en-us/articles/360053926834

Guide to Handin: Creating rubrics https://aula.zendesk.com/hc/en-us/articles/360017154820-How-can-I-use-Rubrics-for-Assignments-in-Aula-

References and Further Reading

Baume, D. (2009) Writing and using good learning outcomes. Leeds Metropolitan University. ISBN 978-0-9560099-5-1. Leeds Beckett Repository: http://eprints.leedsbeckett.ac.uk/id/eprint/2837/1/Learning_Outcomes.pdf

Boud, D. & Falchikov, N. (2007) Rethinking Assessment in Higher Education. London: Routledge.

Brookhart, S.M. & Chen, F. (2015) The quality and effectiveness of descriptive rubrics, Educational Review, 67:3, pp.343-368.  http://dx.doi.org/10.1080/00131911.2014.929565

Dawson, P. (2017) Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), pp.347-360. https://doi.org/10.1080/02602938.2015.1111294

Gipps, C.V. (1994) Beyond testing: Towards a theory of educational assessment. Psychology Press.

Sadler, D.R. (1987) Specifying and promulgating achievement standards. Oxford Review of Education, 13(2), pp.191-209.

SEEC (2016) Credit Level Descriptors. Available: http://www.seec.org.uk/wp-content/uploads/2016/07/SEEC-descriptors-2016.pdf

UK QAA Quality Code (2014) Part A – Setting and Maintaining Academic Standards. Available: https://www.qaa.ac.uk/docs/qaa/quality-code/qualifications-frameworks.pdf


Center for Innovative Teaching and Learning

Rubrics for Assessment

A rubric is an explicit set of criteria used for assessing a particular type of work or performance (TLT Group, n.d.) and provides more details than a single grade or mark. Rubrics, therefore, will help you grade more objectively.

Have your students ever asked, “Why did you grade me that way?” or stated, “You never told us that we would be graded on grammar!” As a grading tool, rubrics can address these and other issues related to assessment: they reduce grading time; they increase objectivity and reduce subjectivity; they convey timely feedback to students; and they improve students’ ability to include required elements of an assignment (Stevens & Levi, 2005). Grading rubrics can be used to assess a range of activities in any subject area.

Elements of a Rubric

Typically designed as a grid-type structure, a grading rubric includes criteria, levels of performance, scores, and descriptors which become unique assessment tools for any given assignment. The table below illustrates a simple grading rubric with each of the four elements for a history research paper. 

Criteria identify the trait, feature or dimension which is to be measured and include a definition and example to clarify the meaning of each trait being assessed. Each assignment or performance will determine the number of criteria to be scored. Criteria are derived from assignments, checklists, grading sheets or colleagues.

Examples of Criteria for a term paper rubric

  • Introduction
  • Arguments/analysis
  • Grammar and punctuation
  • Internal citations

Levels of performance

Levels of performance are often labeled as adjectives which describe the performance levels. Levels of performance determine the degree of performance which has been met and will provide for consistent and objective assessment and better feedback to students. These levels tell students what they are expected to do. Levels of performance can be used without descriptors but descriptors help in achieving objectivity. Words used for levels of performance could influence a student’s interpretation of performance level (such as superior, moderate, poor or above or below average).

Examples to describe levels of performance

  • Excellent, Good, Fair, Poor
  • Master, Apprentice, Beginner
  • Exemplary, Accomplished, Developing, Beginning, Undeveloped
  • Complete, Incomplete

Scores make up the system of numbers or values used to rate each criterion and often are combined with levels of performance. Begin by asking how many points are needed to adequately describe the range of performance you expect to see in students’ work. Consider the range of possible performance levels.

Example of scores for a rubric

1, 2, 3, 4, 5 or 2, 4, 6, 8

Descriptors

Descriptors are explicit descriptions of the performance and show how the score is derived and what is expected of the students. Descriptors spell out each level (gradation) of performance for each criterion and describe what performance at a particular level looks like. Descriptors describe how well students’ work is distinguished from the work of their peers and will help you to distinguish between each student’s work. Descriptors should be detailed enough to differentiate between the different level and increase the objectivity of the rater.

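To pull the four elements together, here is a minimal sketch of how a small analytic rubric might be represented and scored. The criteria, level labels, scores and descriptors are hypothetical examples rather than the history research paper table from the original guide.

```python
# Hypothetical analytic rubric: criteria x levels of performance,
# with a score and a descriptor for each cell.
LEVELS = ["Excellent", "Good", "Fair", "Poor"]              # levels of performance
SCORES = {"Excellent": 4, "Good": 3, "Fair": 2, "Poor": 1}  # score per level

rubric = {
    "Introduction": {
        "Excellent": "Clearly states the thesis and previews the argument.",
        "Good": "States the thesis but previews the argument only partially.",
        "Fair": "Thesis is present but vague.",
        "Poor": "No identifiable thesis.",
    },
    "Internal citations": {
        "Excellent": "All sources cited accurately and consistently.",
        "Good": "Minor citation errors.",
        "Fair": "Frequent citation errors.",
        "Poor": "Sources largely uncited.",
    },
}

def total_score(judgements: dict) -> int:
    """Sum the scores for the level awarded on each criterion."""
    return sum(SCORES[level] for level in judgements.values())

# Example marking decision for one paper (hypothetical):
awarded = {"Introduction": "Good", "Internal citations": "Excellent"}
print(total_score(awarded), "out of", len(rubric) * max(SCORES.values()))
# -> 7 out of 8
```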

Developing a Grading Rubric

First, consider using any of a number of existing rubrics available online. Many rubrics can be used “as is.” Or, you could modify a rubric by adding or deleting elements or combining others for one that will suit your needs. Finally, you could create a completely customized rubric using specifically designed rubric software or just by creating a table with the rubric elements. The following steps will help you develop a rubric no matter which option you choose.

  • Select a performance/assignment to be assessed. Begin with a performance or assignment which may be difficult to grade and where you want to reduce subjectivity. Is the performance/assignment an authentic task related to learning goals and/or objectives? Are students replicating meaningful tasks found in the real world? Are you encouraging students to problem solve and apply knowledge? Answer these questions as you begin to develop the criteria for your rubric.
  • List criteria. Begin by brainstorming a list of all criteria, traits or dimensions associated with the task. Reduce the list by chunking similar criteria and eliminating others until you produce a range of appropriate criteria. A rubric designed for formative and diagnostic assessments might have more criteria than those rubrics rating summative performances (Dodge, 2001). Keep the list of criteria manageable and reasonable.
  • Write criteria descriptions. Keep criteria descriptions brief, understandable, and in a logical order for students to follow as they work on the task.
  • Determine level of performance adjectives.  Select words or phrases that will explain what performance looks like at each level, making sure they are discrete enough to show real differences. Levels of performance should match the related criterion.
  • Develop scores. The scores will determine the ranges of performance in numerical value. Make sure the values make sense in terms of the total points possible: What is the difference between getting 10 points versus 100 points versus 1,000 points? The best and worst performance scores are placed at the ends of the continuum and the other scores are placed appropriately in between. It is suggested to start with fewer levels while still distinguishing work that does not meet the criteria. Also, it is difficult to make fine distinctions using qualitative levels such as never/sometimes/usually, limited acceptance/proficient/NA, or poor/fair/good/very good/excellent. How will you make the distinctions?
  • Write the descriptors. As a student is judged to move up the performance continuum, previous level descriptions are considered achieved in subsequent description levels. Therefore, it is not necessary to include “beginning level” descriptors in the same box where new skills are introduced.
  • Evaluate the rubric. As with any instructional tool, evaluate the rubric each time it is used to ensure it matches instructional goals and objectives. Be sure students understand each criterion and how they can use the rubric to their advantage. Consider providing more details about each of the rubric’s areas to further clarify these sections to students. Pilot test new rubrics if possible, review the rubric with a colleague, and solicit students’ feedback for further refinements.

Types of Rubrics

Determining which type of rubric to use depends on what and how you plan to evaluate. There are several types of rubrics including holistic, analytical, general, and task-specific. Each of these will be described below.

Holistic

All criteria are assessed as a single score. Holistic rubrics are good for evaluating overall performance on a task. Because only one score is given, holistic rubrics tend to be easier to score. However, holistic rubrics do not provide detailed information on student performance for each criterion; the levels of performance are treated as a whole.

  • “Use for simple tasks and performances such as reading fluency or response to an essay question . . .
  • Getting a quick snapshot of overall quality or achievement
  • Judging the impact of a product or performance” (Arter & McTighe, 2001, p 21)

Analytical

Each criterion is assessed separately, using different descriptive ratings. Each criterion receives a separate score. Analytical rubrics take more time to score but provide more detailed feedback.

  • “Judging complex performances . . . involving several significant [criteria] . . .
  • Providing more specific information or feedback to students . . .” (Arter & McTighe, 2001, p 22)

Generic

A generic rubric contains criteria that are general across tasks and can be used for similar tasks or performances. Criteria are assessed separately, as in an analytical rubric.

  • “[Use] when students will not all be doing exactly the same task; when students have a choice as to what evidence will be chosen to show competence on a particular skill or product.
  • [Use] when instructors are trying to judge consistently in different course sections” (Arter & McTighe, 2001, p 30)

Task-specific

A task-specific rubric assesses a single, specific task, with its unique criteria assessed separately. However, it may not be possible to account for each and every criterion involved in a particular task, which could mean overlooking a student’s unique solution (Arter & McTighe, 2001).

  • “It’s easier and faster to get consistent scoring
  • [Use] in large-scale and “high-stakes” contexts, such as state-level accountability assessments
  • [Use when] you want to know whether students know particular facts, equations, methods, or procedures” (Arter & McTighe, 2001, p 28) 

Grading rubrics are effective and efficient tools which allow for objective and consistent assessment of a range of performances, assignments, and activities. Rubrics can help clarify your expectations and will show students how to meet them, making students accountable for their performance in an easy-to-follow format. The feedback that students receive through a grading rubric can help them improve their performance on revised or subsequent work. Rubrics can help to rationalize grades when students ask about your method of assessment. Rubrics also allow for consistency in grading for those who team teach the same course and for TAs assigned to the task of grading, and they serve as good documentation for accreditation purposes. Several online sources exist which can be used in the creation of customized grading rubrics; a few of these are listed below.

Arter, J., & McTighe, J. (2001). Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance. Thousand Oaks, CA: Corwin Press, Inc.

Stevens, D. D., & Levi, A. J. (2005). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Sterling, VA: Stylus.

The Teaching, Learning, and Technology Group (n.d.). Rubrics: Definition, tools, examples, references. http://www.tltgroup.org/resources/flashlight/rubrics.htm

Selected Resources

Dodge, B. (2001). Creating a rubric on a given task. http://webquest.sdsu.edu/rubrics/rubrics.html

Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.

Rubric Builders and Generators

eMints.org (2011). Rubric/scoring guide. http://www.emints.org/webquest/rubric.shtml

General Rubric Generator. http://www.teach-nology.com/web_tools/rubrics/general/

RubiStar (2008). Create rubrics for your project-based learning activities. http://rubistar.4teachers.org/index.php


Suggested citation

Northern Illinois University Center for Innovative Teaching and Learning. (2012). Rubrics for assessment. In Instructional guide for university faculty and teaching assistants. Retrieved from https://www.niu.edu/citl/resources/guides/instructional-guide



Criterion-Referenced Assessment: Evaluating Student Learning Against Set Standards

Understanding the basics: criterion-referenced assessment definition.

Criterion-referenced assessment (CRA) is a type of evaluation used to measure a student's performance against a set of predetermined criteria or standards. Unlike norm-referenced assessments, which compare a student's performance to the performance of a group, CRAs focus solely on what an individual student knows, not how they compare to others. An article in the Journal of Learning Design states that criterion-referenced assessment arguably results in greater reliability, validity, and transparency than norm-referenced assessment. Ultimately, the choice between criterion-referenced assessment (CRA) and norm-referenced assessment (NRA) often depends on the purpose of the assessment and the information you want to gather.

Diving Deeper: The Purpose of Criterion-Referenced Assessment

The primary purpose of criterion-referenced assessments is to evaluate whether a student has achieved specific learning objectives. One of the defining characteristics of a criterion-referenced assessment is that these assessments measure students against predetermined standards. Standards in education are used to guide public school instruction, assessment, and curriculum. They define the knowledge and skills students should possess at critical points in their learning journey. 

For teachers, criterion-referenced assessments provide a roadmap for instruction. They can help teachers identify what students already know and what they still need to learn, allowing teachers to tailor their instruction to meet the needs of individual students. For students, criterion-referenced assessments can provide a clear understanding of what they are expected to learn and be able to do. They can help students set learning goals and monitor their own progress toward these goals.

For parents, criterion-referenced assessments offer a clear and objective measure of their child's learning and provide parents with valuable information to support their learning at home. In addition to these purposes, criterion-referenced assessments also play a crucial role in educational policy and practice. They are often used to monitor the effectiveness of educational programs and interventions, to hold schools and teachers accountable for student learning, and to inform decisions about curriculum and instruction.

Criterion-Referenced Assessment vs. Summative Assessment

While a summative assessment could be a criterion-referenced assessment if it's designed to measure student performance against a specific set of standards, not all summative assessments are criterion-referenced. For example, a final exam that ranks students in comparison to each other ( norm-referenced assessment ) rather than against a set standard would be a summative assessment but not a criterion-referenced assessment. Similarly, not all criterion-referenced assessments are summative; they could also be used in a formative way to guide instruction throughout a unit.

Case Studies: Effective Use of Criterion-Referenced Assessments in Classrooms

Below are some examples of Criterion-Referenced assessments: 

  • Standardized Tests: These are often exams that measure individual student performance based on set standards. Examples include state achievement tests. 
  • Chapter Tests: Many textbooks include chapter tests that are criterion-referenced. These tests measure a student's understanding of the specific content and skills presented in that chapter.
  • Quizzes: Teachers often create their own quizzes and unit tests that are criterion-referenced. These assessments measure a student's understanding of the content and skills that have been taught in a particular unit of study.
  • Performance Assessments : These require students to perform a task rather than answer questions. For example, in a science class, a performance assessment might require students to design and conduct an experiment. The criteria for success would be clearly defined and shared with students in advance.
  • Portfolio Assessments: These involve a collection of a student's work over time, which is evaluated against a set of criteria. This could include writing samples, art projects, or other evidence of a student's learning.

To illustrate the effectiveness of criterion-referenced assessments, let's consider a few case studies:

Elementary Mathematics:

In a third-grade classroom, a teacher uses a criterion-referenced assessment to evaluate students' understanding of multiplication. The assessment includes a variety of problems that are aligned with their content standards and require students to demonstrate their ability to multiply single- and multi-digit numbers. The results of the assessment show that while most students have mastered single-digit multiplication, many are struggling with multi-digit multiplication. 

High School Science:

A high school biology teacher uses a criterion-referenced assessment to evaluate students' understanding of cell structure and function. The assessment includes multiple-choice questions, short-answer questions, and a diagram labeling activity. The results of the assessment show that while most students understand the basic structure of a cell, many are struggling to understand the function of specific organelles. 

College-Level Writing:

A college professor uses a criterion-referenced assessment to evaluate students' writing skills. The assessment includes a rubric that outlines specific criteria for effective writing, including organization, clarity, grammar and mechanics, and use of evidence. The results of the assessment show that while most students are proficient in organization and clarity, many need improvement in grammar and mechanics, and use of evidence. 

Criterion-Referenced Reading Assessment: A Closer Look

Criterion-referenced reading assessments are a powerful tool in the realm of literacy education. These assessments can measure a student's reading ability against a set standard, such as grade-level expectations. For example, a criterion-referenced reading assessment might evaluate a student's ability to identify the main idea in a passage, use context clues to determine the meaning of unfamiliar words, or make inferences based on information in the text.

In conclusion, criterion-referenced assessments offer a unique perspective, focusing on individual mastery rather than comparative performance. Whether it's reading, math, or any other subject, these assessments can provide invaluable insights into a student's learning journey.

People also ask about Criterion-Referenced Assessment:

What is a criterion-referenced assessment?

A criterion-referenced assessment is a type of evaluation that measures a student's performance against a set of predetermined criteria or standards. It focuses on what a student knows or can do, rather than comparing their performance to others.

What are the examples of criterion-referenced assessment?

Examples of criterion-referenced assessments include chapter tests in a textbook, driver's license written exams, many certification tests, and most classroom tests that teachers develop to assess specific topics or skills.

What is the importance of criterion-referenced assessment?

Criterion-referenced assessments are important because they allow educators to measure a student's understanding or skill level in relation to a specific set of standards or criteria. They can help identify a student's strengths and weaknesses, guide instruction, and provide meaningful feedback to students about their learning progress.

How do criterion-referenced assessments differ from criterion-based assessments?

While the terms "criterion-referenced assessment" and "criterion-based assessment" are often used interchangeably, there are subtle differences between the two. Both types of assessments measure a student's performance against a set of criteria or standards. However, criterion-based assessments typically refer to a broader category of assessments, including performance-based assessments, authentic assessments, and portfolio assessments, which may not strictly adhere to the pass/fail nature of criterion-referenced assessments.


Authentic assessment: an oral pitch

This is a business-based technique that can be utilised in many disciplines where persuasive oral language is required. The assessment tasks can be applied to real-world settings.

On this page:

Outcomes of the assessment | Creating an oral pitch | Assessing the oral pitch | UTS MBA case study | What research tells us

Outcomes of the assessment

  • Argue a point to a specified audience using appropriate verbal skills
  • Present a clearly constructed solution to a problem 
  • Review, analyse and reflect upon the success of a presentation through feedback, to inform future opportunities

Suitable for

This can be an individual or group assessment task. It is adaptable to any discipline where a persuasive, short, oral presentation is required.

Assessment type

The assessment can be either formative or summative. It can stand alone but can equally scaffold into a larger assessment piece such as a project, a report, a design, a model or prototype, or another larger presentation. A pitch can be an individual, pair or group assessment. It can be presented face to face, synchronously through Zoom or Teams, or asynchronously using video.

When to introduce

Early in the semester as it can easily scaffold into other more developed assessments.

Time required

What tools you can use.

  • Canvas Rich Content Editor
  • Presentation tools (MS PowerPoint, Prezi, Canva, Slide Bot) with voice over
  • Video recording (Zoom)

A pitch can be videoed using Kaltura and then uploaded into Canvas. A poster design or a visual storyboard can enhance a pitch. PowerPoint, Prezi, Canva and Slide Bot can all be used to enhance your visuals. However, the technology should never distract from the presenter and the clear message.

Guiding principles

  • Authentic assessment
  • Active learning
  • Practice-oriented learning

Attribution

IML, UTS Business School

A pitch is a short, persuasive, intentional presentation of an innovative idea, problem solution, prototype or business proposition directed at a specific audience. Think “Shark Tank” (a reality television program where entrepreneurs pitch their business models to investors). The objective of the pitch is to sway the audience, in essence to win them over to applaud, agree with or invest in your idea.

Preparing a pitch involves thorough understanding of the audience, deliberate framing of the problem and solution, the resources required (the “ask”), and the method by which all of this will be communicated. (Neck, Neck, & Murray, 2020)

Why it’s authentic

The pitch gives students a real-world opportunity to demonstrate a technique that is used widely in the business sector to reach potential investors and to convince an audience of a solution to a real-world problem.

Different types of pitch

Some popular pitches include: 

  • the elevator pitch, a brief, persuasive speech used to spark interest in a project, product, idea or yourself; it can be used in any genre and is intended to take as long as an elevator ride, 20-30 seconds; 
  • the rocket pitch , a three-minute entrepreneurial pitch with slides about an idea with instant audience feedback;  
  • the Pixar pitch , based upon the successful animation company of the same name, utilises the narrative of a story (with a guiding framework) that draws the listeners in and continues to engage them as they follow a storyline; 
  • the question pitch, which elicits an active response, engages listeners and encourages them to agree with the premise.

A pitch should be brief. Depending on the type of pitch, the length varies from 20-30 seconds (elevator pitch) to 3-5 minutes (Pixar pitch).

Where to include the pitch as an assessment task

  • Project-based learning across disciplines
  • Business studies for example marketing and advertising
  • Innovation and Entrepreneurship subjects
  • Career development tasks
  • Digital communications subjects
  • Problem-solving activities across faculties
  • Pitching a book or article idea

How students have responded

Feedback collected from research undertaken in the UK by Smith (2012) reflected that students enjoyed the pitch as an assessment offering, largely because it addressed a variety of learning styles that were often unaccommodated by regular assessments. Students found that the skills and learning experiences involved related to, and positioned them well for, their future careers.

Tips for students

Ensure that your students:

  • Rehearse the pitch
  • Speak to the listeners (eye contact); avoid reading
  • Speak clearly and at an even pace
  • Pause for effect and to breathe
  • Make the pitch personable/relatable or use humour
  • Create a narrative the listeners can follow and engage them fully.
  • Use rhetorical questions
  • Leave the audience with a call to action (a quote, a return to your original premise, something to do or think about, a question)

Depending on the discipline, the objective of the pitch should be outlined to ensure students understand the expectations of the facilitator. There are two main components in the pitch: content and delivery. Here is a sample marking guide. This could be fleshed out to create a specific marking rubric.

Marking criteria example

Advanced MBA 21949 Challenge/Opportunity Discovery and Bachelor of Management/Business 21643 Innovation Lab

This professionally integrated subject equips students with the skills and the theoretical and analytical knowledge necessary to examine and solve authentic real-world problems that impact industry and society. Students conceive innovative and digital strategies, identify megatrends and potentially provide solutions that transform organisations. Students are encouraged to collaborate creatively. The pitch as a formative assessment task is used in an authentic way whereby students are challenged to convincingly present their solutions, take on feedback and polish their pitch in a final presentation to industry judges, colleagues and mentors.

Interim Oral Pitch (formative assessment)

Challenge: Pitch a solution to a real-world problem in groups. Each team gave an ‘elevator pitch’ – 2 minutes – addressing two dimensions:

  • How desirable is the solution – how much impact would it have?
  • How easy is it to implement – what does it take to make it happen?

The lecturer/tutor made a whiteboard matrix, and each participant could place their judgement of where the pitch sat on each dimension. Every team then got instant feedback and an idea of where it was positioned. The instant feedback tool made this easy to set up, to see and to manage.

Then, in breakout rooms – two buddy teams in each – teams had to play devil’s advocate with each other to get really in-depth feedback: why this idea will fail, how to improve it, and what worked really well. Students really enjoyed the smaller-group feedback and also socially overcame some fatigue as they met new people through the random breakout allocation. The lecturer/tutor and mentors popped in to assist.

Shark Tank-style group pitch

Students tend to have very good outcomes from this industry-style pitch, having had feedback from the elevator pitch and having learned and developed good pitching presentation skills.

The final group pitch is held shark-tank style in teams. There are three industry judges, and experts and mentors are able to complete an online poll and use the chat to give instant feedback. As well as the judges’ decision, a people’s choice award is given. Tutors are upskilled to teach students effective pitching skills. This ensures innovative and creative pitch presentations from all groups.

…engaging and worthy problems or questions of importance, in which students must use knowledge to fashion performances effectively and creatively. The tasks are either replicas of or analogous to the kinds of problems faced by adult citizens and consumers or professionals in the field. (Wiggins 1993)

Increasingly students need to feel relevance and connectivity to learning activities and assessment tasks. Authentic assessment enables students to identify a context and recognise that theoretical knowledge has a more complex application when applied to scenarios. As they draw together their knowledge and skills to engage productively and solve problems, their behaviour clearly shows, both to staff and themselves, the level of capacity or competency they have gained. Authenticity is a fundamental characteristic of good assessment practice, and students usually value it highly.

Bosco, A.M., & Ferns, S. (2014). Embedding of Authentic Assessment in Work-Integrated Learning Curriculum. Asia-Pacific Journal of Cooperative Education, 15(4), 281-290. Retrieved from https://eric.ed.gov/?id=EJ1113553

Crafting an Elevator Pitch: Introducing Your Company Quickly and Compellingly. Mindtools.com. (2020). Retrieved from https://www.mindtools.com/pages/article/elevator-pitch.htm.

Herrington, J., Reeves, T., & Oliver, R. (2010). A guide to authentic e – learning. Retrieved from http://authenticlearning.info/DesignBasedResearch/Design-based_research_files/Chapter9Researching.pdf

Neck, H., Neck, C., & Murray, E. (2020) Entrepreneurship the practice and mindset. Retrieved from https://edge.sagepub.com/neckentrepreneurship/student-resources/chapter-16/learning-objectives

Smith, M. (2012). Improving student engagement with employability: the project pitch assessment. Planet,26(1), 2-7. doi: 10.11120/plan.2012.00260002

Wiggins, G. P. (1993). The Jossey-Bass education series. Assessing student performance: Exploring the purpose and limits of testing. Jossey-Bass. Retrieved from https://psycnet.apa.org/record/1993-98969-000

Citation and attribution

Wehr, D., & Randhawa, K. (2020). “Authentic assessment: An Oral Pitch” in Adaptable Resources for Teaching with Technology, LX.Lab, Institute for Interactive Media & Learning, University of Technology, Sydney.

“Authentic assessment: An Oral Pitch” by Dimity Wehr, and Krithika Randhawa, in Adaptable Resources for Teaching with Technology by Institute for Interactive Media & Learning, University of Technology, Sydney. Available under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International



Library Glion

How to write assessment criteria

  • Key features of good assessment criteria
  • Steps to defining assessment criteria
  • Tips for developing grading rubrics
  • Areas to consider when developing criteria statements
  • Assessment criteria checklist
  • Using criteria for giving feedback to students
  • Self-evaluation and peer feedback


The overriding principles in the design and use of assessment criteria are that students, teachers, and other stakeholders should clearly understand:

  • how and why marks/grades have been awarded
  • that they have been awarded fairly.

Students are assessed on how well they have met the learning outcomes of a course, and therefore, the assessment criteria must be designed to assess just this.  In other words, assessment criteria indicate how the student’s work will be judged in relation to the learning outcomes.

There should also be a clear indication of the level required to pass – the threshold level (expressed in positive terms).

Further information about what is required to achieve a particular grade/mark should be expressed in a more detailed breakdown of each criterion. Differentiated descriptors articulate how the level of achievement will be judged. The criteria and differentiated level descriptors can be conveniently organised into a grading rubric.

Assessment criteria should:

  • relate directly to the course learning outcomes
  • indicate what is required at a pass level (the threshold level) in a positive way
  • help students know what they need to do and how to do it
  • help students understand what you expect at differing levels of achievement
  • be understandable to all stakeholders
  • be manageable in number
  • be distinct from each other
  • be seen as an indication of achievement rather than an exact measurement.

When developing criteria, you might find it helpful to think of them in terms of a flow chart, linking one stage to the next:

[Flow chart: course learning outcomes → assessment criteria → differentiated level descriptors]

5.1 Start with the course learning outcomes. The criteria should cover all the course learning outcomes, and there is a very close relationship between them and the assessment criteria – many of the words will be repeated. The difference is that the assessment criteria describe the level of performance required, often through the use of more evaluative words, e.g. thorough, clear, accurate, wide-ranging, rigorous, main, meaningful, well-reasoned. (You may need to further clarify your meaning of these words.)  Your course learning outcomes should be expressed in terms of what is required to pass the course (the threshold level).

Learning Outcome at Level 6:  You will be able to critically apply costing methods to make informed decisions within the hospitality industry.

Assessment criterion:  You will be judged on ‘the accuracy of your application of costing methods and the validity of your evidence-based decisions’.

Pass level descriptor:  For a typical pass, you will ‘accurately use costing methods to produce reliable data and use this data to make appropriate decisions’.

5.2 Identify your own implicit criteria which influence your judgement. 

When you’re marking learners’ work, what features influence your perception – either positively or negatively? What do you value? Tutors may often have implicit criteria – e.g. style, presentation, integration of sources. To ensure equity and parity, learners need to be clear about all the criteria used to mark their work.  You could select 3 pieces of work, ranging from fail to excellent. Describe those characteristics which denote the level of achievement and use these to help you develop descriptors. 

5.3 Look at existing examples of criteria. 

You may be able to save much time and energy by looking at examples of criteria to modify and adapt. You may find it helpful to use others’ words, if they express and articulate your thoughts. You may also be able to eliminate some examples as inappropriate for your needs. However, you must bear in mind that your assessment criteria MUST relate to the learning outcomes for each individual course.  

5.4 Identify the differences between grades/marks.  

Having decided on the aspects to be judged (i.e. the assessment criteria), the next question is ‘What will a learner need to demonstrate in order to achieve a specific grade/mark?’ (i.e. the standards or the differentiated descriptors).  The use of a grading rubric grid can be particularly useful. Grading rubrics can:

  • increase transparency for all stakeholders.
  • facilitate moderation.
  • be useful if there are several course teachers.
  • be built into the course Moodle page, making grading faster.
  • be particularly useful in providing the basis for giving feedback.

Tips for developing grading rubrics:

Developing grading rubrics takes time – to articulate your thoughts, to select appropriate and meaningful language, to clarify the progression from one grade to another, and to check out understanding with others. Nevertheless, it is a valuable and important exercise.

  • It helps to start by developing the criteria for the 50% column – what is the minimum, threshold standard required to pass? Once this is articulated, you can build up and down the columns.
  • Phrase the 50%/pass descriptor in positive terms. Words such as ‘inadequate’, ‘limited’, ’inaccurate’ generally describe work which does not meet the Learning Outcomes.
  • Work with a colleague on the grading rubric – discussion helps articulation of difficult ideas.
  • As you move up the grades, avoid introducing new criteria into the descriptors. The main aspects of the criterion should carry through the levels, with an increasing demand in that particular aspect.

The following are some examples of ways you could move up the levels:

  • increasing the degree of autonomy required, e.g. the level of independence, decision-making or initiative needed
  • broadening the situation/context in which the learner applies the learning, e.g. a pass might relate specifically to in-module teaching, whilst higher grades might draw on wider experiences/sources
  • increasing the range/number of elements you expect the learner to use, e.g. using a wider range of presentation techniques, combining more problem-solving techniques, or using a combination of skill elements.

When developing assessment criteria, it may help to consider the following:

  • How many criteria will you have? Using too many criteria can make the marking process complex and lead to a more rigid approach, e.g. more than 7 or 8 per module may be difficult to work with. Efficiency and effectiveness are increased by not having too many.
  • How many grades of achievement will you have? A 5-point scale is normally broad enough to mark the full range of learner work. Having too many levels may result in an averaging out of the marks, so that, for example, all learners are awarded 55%. Research has shown that reliability of marking between tutors is increased by using a smaller number of bands.
  • How will you relate them to the course learning outcomes? Will each criterion relate directly to each individual outcome, or will you group some outcomes together by theme? Several Learning Outcomes could be linked by a single assessment criterion.
  • How will you avoid telling the learner what to do? When writing criteria, ensure you are not telling learners precisely what to include in their assignment. For example, an assessment criterion should read ‘Your work will be judged on the relevant application of key theories’ rather than ‘Your work will be judged on its reference to the theories of X, Y and Z’. It is useful to think of the challenge that you are presenting to the learners, and to ensure that your criteria do not diminish that challenge.
  • How will you avoid writing criteria which could restrict or restrain learners? Creativity/spontaneity/originality should be rewarded. Your assessment criteria should be explicit about this. This may be particularly important at FHEQ levels 6/7.
  • Weighting criteria – What is the relative importance of each criterion – are they all of equal worth, or do you value some more highly? A list or grid of assessment criteria with grades of achievement may give learners the impression that all criteria are of equal value. If this is not the case, you need to illustrate and clarify the relative importance of different criteria, as this will influence students’ decisions about how to spread their time and effort (a small worked sketch of weighted scoring follows this list).
  • Aggregated scores – How can you ensure you maintain a holistic approach to assessing the work, and avoid reducing a complex issue to segments with aggregated numerical scores? Each criterion should work together with the others and so contribute to the whole picture, avoiding a reductionist approach. Options are to include a specific criterion relating to overall competence, or to weight criteria appropriately. Again, it is important to acknowledge that criteria provide clear, but not exact, indicators for assessing. If criteria are over-specific or too numerous, there is a danger of constraining both the learners and yourself.
  • Criteria for exams – Whatever the mode of assessment, there is a need to be explicit about the criteria used to judge the work and to link these to the Learning Outcomes which are being assessed. This is equally important when using examinations as the chosen assessment mode.
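As a purely illustrative worked sketch of the weighting and aggregation points above: the criterion names, weights, band-to-percentage mapping, and marks below are all invented, and a real scheme would still need moderation and a holistic check.

```python
# Hypothetical weighted-score sketch; weights and marks are invented for illustration.

# Each criterion is marked on a 5-band scale; bands are mapped here to rough
# percentage midpoints purely for the sake of the example.
BAND_TO_PERCENT = {1: 30, 2: 45, 3: 55, 4: 65, 5: 80}

WEIGHTS = {
    "Application of key theories": 0.4,
    "Use of evidence": 0.3,
    "Organisation and communication": 0.3,
}

awarded_bands = {
    "Application of key theories": 4,
    "Use of evidence": 3,
    "Organisation and communication": 5,
}

def weighted_mark(weights: dict, bands: dict) -> float:
    """Combine per-criterion bands into a single percentage using the weights."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[c] * BAND_TO_PERCENT[bands[c]] for c in weights)

if __name__ == "__main__":
    # 0.4 * 65 + 0.3 * 55 + 0.3 * 80 = 26 + 16.5 + 24 = 66.5
    print(f"Aggregated mark: {weighted_mark(WEIGHTS, awarded_bands):.1f}%")
```

If all criteria are meant to count equally, running the same calculation with equal weights makes that explicit; either way, publishing the weights answers learners’ question of where to spend their time and effort.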

It might be helpful to use the following checklist:

[Checklist image]

Timely and targeted feedback is important for learning and progression. Giving feedback is time-consuming, but the process can be made more efficient by focussing it around the criteria.

 Options could be:

  •  Tell the learners in advance on which criteria you will be giving feedback.
  • Return a copy of the assessment grid/criteria sheet to the learner, with their level of achievement highlighted (a minimal sketch of such a sheet follows this list). Note that this on its own does not suffice as feedback; individually tailored comments should accompany any such sheet or rubric.
  • Peer or self-evaluation: as a practice activity, students review their own or each other’s work and give feedback using the criteria, with support from the teacher.
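Picking up the hypothetical rubric structure sketched earlier, the fragment below is one rough way such a returned criteria sheet could be generated, with the achieved band highlighted and space for the individually tailored comment that must accompany it. The band labels, criterion names, and comments are invented.

```python
# Hypothetical sketch: render a plain-text criteria sheet with the achieved band
# highlighted and a tailored comment per criterion. All names are illustrative.

BANDS = ["Fail", "Pass", "Merit", "Distinction"]

def criteria_sheet(achieved: dict, comments: dict) -> str:
    """Mark each criterion's achieved band and attach the tutor's comment."""
    lines = []
    for criterion, band_index in achieved.items():
        boxes = ["[x]" if i == band_index else "[ ]" for i in range(len(BANDS))]
        row = "  ".join(f"{box} {band}" for box, band in zip(boxes, BANDS))
        lines.append(f"{criterion}\n  {row}\n  Comment: {comments.get(criterion, '')}")
    return "\n\n".join(lines)

if __name__ == "__main__":
    print(criteria_sheet(
        achieved={"Application of key theories": 2, "Organisation of the argument": 1},
        comments={
            "Application of key theories": "Strong link between theory and your case study.",
            "Organisation of the argument": "Signpost the move from evidence to conclusion.",
        },
    ))
```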

 How can students evaluate the quality of their own and others’ work/achievement?

Independent learners need to develop skills of self-evaluation and reflection, as key academic and professional skills. Learners need guidance, support and practice in:

  • making informed evaluation of the quality of their own and others’ work
  • providing thoughtful and constructive feedback.

Clear and understandable assessment criteria are essential for learners to make informed and valid judgements in self and peer assessment, and they need to openly discuss and debate the meaning and interpretation of the criteria.

Useful strategies include:

  • self-assessment – learners undertake a self-assessment exercise, mark their work in accordance with the identified criteria, and submit this alongside their work.
  • learners suggest ways in which they could improve upon their mark, to provide feed-forward for subsequent learning tasks.
  • using pairs – learners explaining their thinking to each other helps clarify their understanding of the criteria.
  • the learner group identifies and negotiates their own criteria, e.g. for a presentation. Individually, learners identify 5 criteria, then share, discuss, and agree with a partner. These criteria are then posted up, and a group discussion clarifies meaning and merges ideas. The group may also prioritise the criteria. This helps learners identify what makes a good performance.
  • learners (individually or in groups) generate and add their own criteria to those the teacher has identified.

Using Portfolios to Assess Student Learning

Allowing students to select the work that they feel is most representative of their learning is a powerful way to evaluate student knowledge.


How should a teacher, school, or system determine the purpose of a portfolio? It depends on how the portfolio will be used. It is essential to know your community so you can select the type of portfolio that will serve it best. You will also need to review state requirements and how best to fulfill them, especially if the portfolio is going to take the place of something more traditional. Consider the following questions:

•   What are you hoping to achieve with this portfolio?

•   What skills and content do you want students to demonstrate?

•   Will the portfolio be assessed? If so, how?

•   What criteria will show successful completion?

•   What does exemplary work look like?

•   What kind of variety will be acceptable?

•   In what format should portfolios be submitted?

•   Where will the portfolio be housed, and will it be digital or hard copy?

•   Who will have access to the portfolio once it is created?

•   How much autonomy does any individual teacher or student have when creating a portfolio?

•   What kinds of buckets will students have to show learning? (Buckets are the overarching competencies in which multiple subject areas can fit.)

•   What standards will be demonstrated through the portfolio? Will students need to present evidence of learning or just reflect on individual selections?

•   What process will you use to teach students to “collect, select, reflect, connect”?

After asking these questions, it is crucial to backward-plan from what the successful candidate will contribute. What kinds of artifacts will show the success criteria as planned? How many different opportunities will they have to show that skill or knowledge in class? Once we know what we want our outcomes to be, it is easier to ensure that we are teaching for success. Teachers should ask, “What do kids know and what knowledge are they missing, and how will I fill the gaps?” Leaders should ask, “What do teachers know, and how much professional learning do we need to provide to ensure consistency if we are implementing portfolios together as a school or system?”


Co-Constructing Selection Criteria

Once you’ve identified a portfolio type and determined a purpose, you can start getting more granular. How do individual class objectives fit the generic buckets you’ve determined, and how can you ensure students co-construct the portfolio selection criteria? (Remember, generic buckets are the larger competencies that all classes and content areas will fit in. They are “generic” because they don’t get into specific standards.) Students will need to express the end goal of their portfolio first and then come up with a specific checklist to follow while deciding what to include.

Creating a Professional Portfolio as a Model

It is always helpful to complete an assessment you are asking students to do, both to identify any stumbling blocks they may encounter and to make sure every step of the assignment is taught in advance. One way to ensure this is to create a professional portfolio that mirrors the kind of portfolio students are asked to create.

Portfolio Assessment Versus Traditional Testing

Standardized testing seeks to level the playing field for all students. Of course, most educators understand that such tests do nothing of the sort.

Standardized tests privilege the few who may be good at test taking or have the opportunity to work with tutors. Worse, they are often misleading and biased in favor of certain social and cultural experiences. (For example, when I took the New York State English Regents exam, one of the questions had to do with vaudeville, a long-outdated form of theatrical entertainment that students from other cultures might never even have heard of.) Other forms of testing would better illustrate the depth and understanding of student learning while also giving students more agency and decreasing their anxiety.

If educators genuinely want to know what students know and can do, they should have a universal portfolio system in place that allows students to gather evidence of learning over time. This can be implemented at the national or state level. Educators at every level should be included in the development process to devise the success criteria and the skill sets to be demonstrated over time. If we gather the right stakeholders to make sound decisions, all students will benefit.

Once criteria have been determined, students can start collecting learning from their earliest educational experiences. They can be issued an online account where work can be scanned and collected each year. This information can be shared with parents, students, and future teachers to help inform instruction. Rather than produce test scores that often don’t highlight the depth of student learning, these online portfolios provide a more accurate picture of how students are doing.

Students can be taught to select work they are proud of for their portfolios and to express why they have selected it. Schools and/or states can determine how many pieces should be selected each year, and students can have ownership over what they believe best displays their learning. Obviously, teachers will be supporting students throughout this process.

After students make their selections, they should write standards-based reflections about what the pieces demonstrate and what they learned throughout the process. Because younger students won’t necessarily understand how to do this right away, teachers should scaffold the process a little longer and adjust the language of the standards to be more kid-friendly. Then the feedback they provide on students’ selections will be in a language the students understand, ensuring they’ll be able to progressively do more on their own as the year goes on.

At the end of each school year, students should discuss the goals they’ve set and met as well as new goals to be worked on in the following year. Students can learn the language to use for these discussions at a young age. In the goals, students should talk about the areas where they see progress and then decide what they want to work on moving forward.

Each content area should have a subfolder in the portfolio. In addition to content-specific goals and learning related to academics, students should also be able to demonstrate interpersonal skills like communication, collaboration, and self-regulation. Rubrics can be developed to help students assess their learning levels. Graduation criteria, as well as college- and career-readiness criteria, should also be included.
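One hypothetical way to picture such a structure in data, purely as a sketch (the field names, content area, and sample entries below are invented rather than a prescribed schema):

```python
# Hypothetical sketch of a digital portfolio: one subfolder per content area, with
# each selected artifact carrying a standards-based reflection. Names are invented.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    title: str
    year: int
    standards: list[str]    # standards or "buckets" the piece evidences
    reflection: str          # student's standards-based reflection

@dataclass
class ContentAreaFolder:
    content_area: str        # e.g. "Science", "English Language Arts"
    goals: list[str] = field(default_factory=list)
    artifacts: list[Artifact] = field(default_factory=list)

@dataclass
class Portfolio:
    student: str
    folders: list[ContentAreaFolder] = field(default_factory=list)
    interpersonal_skills: list[str] = field(default_factory=list)  # e.g. collaboration

if __name__ == "__main__":
    portfolio = Portfolio(
        student="Sample Student",
        folders=[ContentAreaFolder(
            content_area="Science",
            goals=["Plan and carry out an investigation"],
            artifacts=[Artifact(
                title="Lab investigation of plant growth",
                year=2024,
                standards=["Analyzes and interprets data"],
                reflection="This piece shows how I planned the investigation and explained my conclusions.",
            )],
        )],
        interpersonal_skills=["communication", "collaboration", "self-regulation"],
    )
    print(portfolio.folders[0].artifacts[0].title)
```

Keeping goals, artifacts, and reflections together for each content area mirrors the subfolder idea above and leaves room to attach rubric levels or readiness criteria later.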

One high school I taught at used to have exit presentations where students had to defend their learning and express why they felt they were ready for their next learning journey. Instead of testing, consider implementing these presentations at the end of each school year. Students will get comfortable sharing what they have learned and asking questions to help clarify that learning. Students, teachers, and leaders can sit on the panels during these presentations. Throughout the school year, students can be taught to lead their conferences, and their parents can sit with them to review the portfolio work. Advisory teachers should be there to provide support, too. In the younger grades, where there is only one teacher, students should be included in the conferences and not left at home. It is important that conversations about learning be conducted with the learner present.

Learning is nuanced, and assessment should be, too. Be sure to offer students the opportunity to be seen as whole people who can demonstrate different skills and knowledge in many ways over time.

Source:  Student-Led Assessment: Promoting Agency and Achievement Through Portfolios and Conferences  (pp. 49–52), by S. Sackstein, Arlington, VA:  ASCD. © 2024 by ASCD. Reprinted with permission. All rights reserved.
