Center for Teaching and Learning

Step 4: Develop Assessment Criteria and Rubrics

Just as we align assessments with the course learning objectives, we also align the grading criteria for each assessment with the goals of that unit of content or practice, especially for assignments that cannot be graded through automation the way multiple-choice tests can. Grading criteria articulate what is important in each assessment, what knowledge or skills students should be able to demonstrate, and how they can best communicate that to you. When you share grading criteria with students, you help them understand what to focus on and how to demonstrate their learning successfully. From good assessment criteria, you can develop a grading rubric.

Develop Your Assessment Criteria | Decide on a Rating Scale | Create the Rubric

Developing Your Assessment Criteria

Good assessment criteria are

  • Clear and easy to understand as a guide for students
  • Attainable rather than beyond students’ grasp at that point in the course
  • Significant in terms of the learning students should demonstrate
  • Relevant in that they assess student learning toward the course objectives addressed by that assessment

To create your grading criteria, consider the following questions:

  • What is the most significant content or knowledge students should be able to demonstrate understanding of at this point in the course?
  • What specific skills, techniques, or applications should students be able to demonstrate using at this point in the course?
  • What secondary skills or practices are important for students to demonstrate in this assessment? (For example, critical thinking, public speaking, or writing, as well as more abstract qualities such as completeness, creativity, precision, or problem-solving.)
  • Do the criteria align with the objectives for both the assessment and the course?

Once you have developed some ideas about the assessment’s grading criteria, double-check to make sure the criteria are observable, measurable, significant, and distinct from each other.

Assessment Criteria Example

Using the questions above, the performance criteria in the example below were designed for an assignment in which students had to create an explainer video about a scientific concept for a specified audience. Each element can be observed and measured based on both expert instructor and peer feedback, and each is significant because it relates to the course and assignment learning goals.

[Figure: assessment criteria for the explainer video assignment]

Additional Assessment Criteria Resources

  • Developing Grading Criteria (Vanderbilt University)
  • Creating Grading Criteria (Brown University)
  • Sample Criteria (Brown University)
  • Developing Grading Criteria (Temple University)

Decide on a Rating Scale

Deciding what scale you will use for an assessment depends on the type of learning you want students to demonstrate and the type of feedback you want to give students on this particular assignment or test. For example, for an introductory lab report early in the semester, you might be less concerned with advanced levels of precision than with correct displays of data and the tone of the report; therefore, grading heavily on copy editing or advanced analysis would not be appropriate. The criteria would likely be more rigorous by the end of the semester, as you build up to the advanced level you want students to reach in the course.

Rating scales turn the grading criteria you have defined into levels of performance expectations for the students that can then be interpreted as a letter, number, or level. Common rating scales include

  • A, B, C, etc. (with or without + and -)
  • 100-point scale with defined cut-offs for letter grades if desired (e.g., B = 89-80; or B+ = 89-87, B = 86-83, B- = 82-80; see the sketch after this list)
  • Yes or no, present or not present (if the rubric is a checklist of items students must show)
  • Below expectations, meets expectations, exceeds expectations
  • Not demonstrated, poor, average, good, excellent
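If you adopt a 100-point scale with letter-grade cut-offs, applying the cut-offs is simple arithmetic. The sketch below is a minimal Python illustration of that idea only; the cut-off values are assumptions standing in for whatever boundaries you define, and it is not part of any grading platform.

```python
# A minimal sketch (not from this guide) of applying letter-grade cut-offs
# such as "B+ = 89-87, B = 86-83, B- = 82-80" consistently.
# The cut-off values below are illustrative assumptions; substitute your own.

GRADE_CUTOFFS = [
    (90, "A"),
    (87, "B+"),
    (83, "B"),
    (80, "B-"),
    (70, "C"),
    (0,  "F"),
]

def letter_grade(score: float) -> str:
    """Return the letter whose lower cut-off the score meets or exceeds."""
    for cutoff, letter in GRADE_CUTOFFS:
        if score >= cutoff:
            return letter
    return "F"

print(letter_grade(85))  # -> "B"
```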

Once you have decided on a scale for the type of assignment and the learning you want students to demonstrate, you can use the scale to clearly articulate what each level of performance looks like, such as defining what A-, B-, or C-level work would look like for each grading criterion. What would distinguish a student who earns a B from one who earns a C? What would distinguish a student who excelled in demonstrating use of a tool from a student who clearly was not familiar with it? Write these distinctions out in descriptive notes or brief paragraphs.

Ethical Implications of Rating Scales

There are ethical implications in each of these types of rating scales. On a project worth 100 points, what is the objective difference between earning an 85 and an 87? On an exceeds/meets/does not meet scale, how can those levels be objectively applied? Different understandings of "fairness" can lead to several ways of grading that might disadvantage some students. Learn more about equitable grading practices here.

Create the Rubric

Rubrics Can Make Grading More Effective

  • Provide students with more complete and targeted feedback
  • Make grading more timely by enabling the provision of feedback soon after the assignment is submitted or presented
  • Standardize assessment criteria among those assigning/assessing the same assignment
  • Facilitate peer evaluation of early drafts of an assignment

Rubrics Can Help Student Learning

  • Convey your expectations about the assignment through a classroom discussion of the rubric prior to the beginning of the assignment
  • Level the playing field by clarifying academic expectations and assignments so that all students understand regardless of their educational backgrounds (e.g., define what you expect analysis, critical thinking, or even introductions/conclusions to include)
  • Promote student independence and motivation by enabling self-assessment
  • Prepare students to use detailed feedback.

Rubrics Have Other Uses:

  • Track development of student skills over several assignments
  • Facilitate communication with others (e.g., TAs, communication center, tutors, other faculty)
  • Refine your own teaching (e.g., by responding to common areas of weakness or to feedback on how well your teaching strategies are preparing students for their assignments)

In this video, CTL's Dr. Carol Subino Sullivan discusses the value of the different types of rubrics.

Many non-test-based assessments might seem daunting to grade, but a well-designed rubric can alleviate some of that work. A rubric is a table that usually has these parts:  

  • a clear description of the learning activity being assessed
  • criteria by which the activity will be evaluated
  • a rating scale identifying different levels of performance
  • descriptions of the performance a student must demonstrate to reach each level.

When you define the criteria and pre-define what acceptable performance for each of those criteria looks like ahead of time, you can use the rubric to compare against student work and assign grades or points for each criterion accordingly. Rubrics work very well for projects, papers/reports, and presentations, as well as in peer review, and good rubrics can save instructors and TAs time when grading.

Sample Rubrics

This final rubric for the scientific concept explainer video combines the assessment criteria and the holistic rating scale:

[Figure: final rubric for the scientific concept explainer video]

When using this rubric, which can be easily adapted to use a present/not present rating scale or a letter grade scale, you can use a combination of checking items off and adding written (or audio/video) comments in the different boxes to provide the student more detailed feedback. 

As a second example, this descriptive rubric was used to ask students to peer-assess and self-assess their contributions to a collaborative project. The rating scale is 1 through 4, and each description of performance builds on the previous one. (See the full rubric, with scales for both product and process, here. This rubric was designed for students working in teams to assess their own contributions to the project as well as their peers’.)

[Figure: descriptive rubric for peer and self-assessment of contributions to a collaborative project]

Building a Rubric in Canvas Assignments

You can create rubrics for assignments and discussion boards in Canvas. Review these Canvas guides for tips and tricks:

  • Rubrics Overview for Instructors
  • What are rubrics?
  • How do I align a rubric with a learning outcome?
  • How do I add a rubric to an assignment?
  • How do I add a rubric to a quiz?
  • How do I add a rubric to a graded discussion?
  • How do I use a rubric to grade submissions in SpeedGrader?
  • How do I manage rubrics in a course?

Additional Resources for Developing Rubrics

Designing Grading Rubrics (Brown University) Step-by-step process for creating an effective, fair, and efficient grading rubric.

Creating and Using Rubrics  (Carnegie Mellon University) Explores the basics of rubric design along with multiple examples for grading different types of assignments.

Using Rubrics  (Cornell University) Argument for the value of rubrics to support student learning.

Rubrics (University of California Berkeley) Shares "fun facts" about rubrics and links to rubric guidelines from many higher ed organizations such as the AAC&U.

Creating and Using Rubrics  (Yale University) Introduces different styles of rubrics and ways to decide what style to use given your course's learning goals.

Best Practices for Designing Effective Rubrics (Arizona State University) Comprehensive overview of rubric design principles.




Assessment Criteria and Rubrics

An introduction.

This guide is an introduction to:

  • Writing an assessment brief with clear assessment criteria and rubrics
  • Grading tools available in Turnitin that enable the use of criteria and rubrics in marking.

Clear and explicit assessment criteria and rubrics increase the transparency of the assessment, aim to develop students into ‘novice assessors’ (Gipps, 1994), and facilitate deep learning. Providing well-designed criteria and rubrics contributes to communicating assessment requirements in a way that is more inclusive to all (including markers), regardless of previous learning experiences or individual differences in language, cultural and educational background. It also facilitates the development of self-judgment skills (Boud & Falchikov, 2007).

  • Assessment brief
  • Assessment criteria
  • Assessment rubric
  • Guidance on how to create rubrics and grading forms
  • Guidance on how to create a rubric in Handin

Terminology Explored

The terms ‘assessment brief’, ‘assessment criteria’ and ‘assessment rubrics’, however, are often used interchangeably, which may lead to misunderstandings and affect the effectiveness of the design and interpretation of the assessment brief. Therefore, it is important to first clarify these terms:

Assessment Brief

An assessment (assignment) brief refers to the instructions provided to communicate the requirements and expectations of assessment tasks, including the assessment criteria and rubrics, to students. The brief should clearly outline which module learning outcomes will be assessed in the assignment.

NOTE: If you are new to writing learning outcomes, or need a refresher, have a look at Baume’s guide to “Writing and using good learning outcomes”, (2009).  See list of references.

When writing an assessment brief, it may be useful to consider the following questions with regards to your assessment brief:

  • Have you outlined clearly what type of assessment you require students to complete? For example, instead of “written assessment”, outline clearly what type of written assessment you require from your students: is it a report, a reflective journal, a blog, a presentation, etc.? It is also recommended to give a breakdown of the individual tasks that make up the full assessment within the brief, to ensure transparency.
  • Is the purpose of the assessment immediately clear to your students, i.e. why the student is being asked to do the task?  It might seem obvious to you as an academic, but for students new to academia and the subject discipline, it might not be clear.  For example, explain why they have to write a reflective report or a journal and indicate which module learning outcomes are to be assessed in this specific assessment task.
  • Is all the important task information clearly outlined, such as assessment deadlines, word count, criteria and further support and guidance?

Assessment Criteria

Assessment criteria communicate to students the knowledge, skills and understanding (in line with the expected module learning outcomes) that assessors expect students to evidence in any given assessment task. To write a good set of criteria, the focus should be on the characteristics of the learning outcomes that the assignment will evidence, not only the characteristics of the assignment (task) itself, e.g., presentation, written task, etc.

Thus, the criteria outline what we expect from our students (based on learning outcomes); however, they do not in themselves make assumptions about the actual quality or level of achievement (Sadler, 1987: 194), and they need to be refined in the assessment rubric.

When writing an assessment brief, it may be useful to consider the following questions with regards to the criteria that will be applied to assess the assignment:

  • Are your criteria related to and aligned with the module and/or course learning outcomes?
  • How many criteria will you assess in any particular task? Consider how realistic and achievable this may be.
  • Are the criteria clear, and have you avoided terms that may not be clear to students (academic jargon)?
  • Are the criteria and standards (your quality definitions) aligned with the level of the course? For guidance, the Credit Level Descriptors (SEEC, 2016) and the QAA Subject Benchmarks in the Framework for Higher Education Qualifications are useful starting points.

Assessment Rubric

The assessment rubric forms part of a set of criteria and refers specifically to the “levels of performance quality on the criteria” (Brookhart & Chen, 2015, p. 343).

Generally, rubrics fall into two categories: holistic and analytic. A holistic rubric assesses an assignment as a whole and is not broken down into individual assessment criteria. For the purpose of this guidance, the focus will be on the analytic rubric, which provides separate performance descriptions for each criterion.

An assessment rubric is therefore a tool used in the process of assessing student work. It usually includes the following essential features:

  • Scoring strategy – can be numerical or qualitative, and is associated with the levels of mastery (quality definitions). (Shown as SCALE in Turnitin.)
  • Quality definitions (levels of mastery) – specify the levels of achievement/performance in each criterion.

 (Dawson, 2017).

The figure below is an example of the features of a complete rubric, including the assessment criteria.

When writing an assessment brief, it may be useful to consider the following questions with regard to the criteria and the associated rubric:

  • Does your scoring strategy clearly define and cover the whole grading range?  For example, do you distinguish between the distinctions (70-79%) and 80% and above?
  • Do the words and terms used to indicate the level of mastery clearly enable students to distinguish between the different judgements? For example, how do you differentiate between work that is outstanding, excellent and good?
  • Is the chosen wording in your rubric too prescriptive? It should be explicit but not overly specific, to avoid students adopting a mechanistic approach to your assignment. For example, instead of stating a minimum number of references, consider referring to the effectiveness or quality of the use of literature, and/or awareness or critical analysis of supporting literature.

NOTE: For guidance across Coventry University Group on writing criteria and rubrics, follow the links to guidance.

 POST GRADUATE Assessment criteria and rubrics (mode R)

 UNDER GRADUATE Assessment criteria and rubrics (mode E)

Developing Criteria and Rubrics within Turnitin

Within Turnitin, depending on the type of assessment, you have a choice between four grading tools:

  • Qualitative Rubric – A rubric that provides feedback but has no numeric scoring. More descriptive than measurable. This rubric is selected by choosing the ‘0’ symbol at the base of the Rubric window.
  • Standard Rubric – Used for numeric scoring. Enter scale values for each column (rubric score) and percentages for each criterion row; the percentages must combine to equal 100%. This rubric can calculate and enter the overall grade (a rough sketch of this weighted calculation appears after this list). This rubric is selected by choosing the % symbol at the base of the Rubric window.
  • Custom Rubric – Add criteria (rows) and descriptive scales; when marking, enter (type) any value directly into each rubric cell. This rubric will calculate and enter the overall grade. This rubric is selected by choosing the ‘Pencil’ symbol at the base of the Rubric window.
  • Grading form – Can be used with or without numerical scoring. If used without numerical scoring, it provides descriptive feedback only; if used with numerical scoring, the scores can be added together to create an overall grade. Note that grading forms can be used without a ‘paper assignment’ being submitted; for example, they can be used to assess work such as a video submission, a work of art, a computer programme or a musical performance.
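To make the idea behind a percentage-weighted rubric concrete, here is a rough, hypothetical sketch of the arithmetic: criterion weights sum to 100% and each column carries a scale value. The criterion names, weights, and scale below are invented for illustration, and this is not Turnitin's actual implementation, only the general principle.

```python
# A rough, hypothetical sketch of a percentage-weighted rubric calculation
# (criterion weights sum to 100%; each column has a scale value).
# NOT Turnitin's implementation; values are illustrative assumptions.

criteria_weights = {          # weights as fractions of the final grade
    "Content accuracy": 0.40,
    "Organisation":     0.30,
    "Use of sources":   0.20,
    "Mechanics":        0.10,
}
scale_max = 4                 # e.g. columns scored 0-4

def overall_percentage(selected_scores: dict[str, int]) -> float:
    """Combine per-criterion scale scores into a single percentage."""
    total = 0.0
    for criterion, weight in criteria_weights.items():
        total += weight * (selected_scores[criterion] / scale_max)
    return round(total * 100, 1)

print(overall_percentage({
    "Content accuracy": 3, "Organisation": 4,
    "Use of sources": 2, "Mechanics": 4,
}))  # -> 80.0
```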

Guidance on how to Create Rubrics and Grading Forms

Guidance by Turnitin:

https://help.turnitin.com/feedback-studio/turnitin-website/instructor/rubric-scorecards-and-grading-forms/creating-a-rubric-or-grading-form-during-assignment-creation.htm

University of Kent – Creating and using rubrics and grading forms (written guidance):

https://www.kent.ac.uk/elearning/files/turnitin/turnitin-rubrics.pdf

Some Examples to Explore

It is useful to explore some examples in higher education; the resource developed by UCL on designing generic assessment criteria and rubrics from level 4 to 7 is a good starting point.

Guidance on how to Create a Rubric in Handin

Within Handin, depending on the type of assessment, you have a choice between three grading tools (see the list below), as well as the option to use “free-form” grading, which allows you to enter anything in the grade field when grading submissions.

  • None = qualitative
  • Range = quantitative – can choose score from range
  • Fixed = quantitative – one score per level

Guide to Handin: Creating ungraded (“free-form”) assignments

https://aula.zendesk.com/hc/en-us/articles/360053926834

Guide to Handin: Creating rubrics

https://aula.zendesk.com/hc/en-us/articles/360017154820-How-can-I-use-Rubrics-for-Assignments-in-Aula-

References and Further Reading

Baume, D. (2009) Writing and using good learning outcomes. Leeds Metropolitan University. ISBN 978-0-9560099-5-1. Leeds Beckett Repository record: http://eprints.leedsbeckett.ac.uk/id/eprint/2837/1/Learning_Outcomes.pdf

Boud, D & Falchikov, N. (2007) Rethinking Assessment in Higher Education. London: Routledge.

Brookhart, S.M. & Chen, F. (2015) The quality and effectiveness of descriptive rubrics, Educational Review, 67:3, pp.343-368.  http://dx.doi.org/10.1080/00131911.2014.929565

Dawson, P. (2017) Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), pp.347-360. https://doi.org/10.1080/02602938.2015.1111294

Gipps, C.V. (1994) Beyond testing: Towards a theory of educational assessment. Psychology Press.

Sadler, D.R. (1987) Specifying and promulgating achievement standards. Oxford Review of Education, 13(2), pp.191-209.

SEEC (2016) Credit Level Descriptors. Available: http://www.seec.org.uk/wp-content/uploads/2016/07/SEEC-descriptors-2016.pdf

UK QAA Quality Code (2014) Part A – Setting and Maintaining Academic Standards. Available: https://www.qaa.ac.uk/docs/qaa/quality-code/qualifications-frameworks.pdf


Assessment Rubrics

A rubric is commonly defined as a tool that articulates the expectations for an assignment by listing criteria and, for each criterion, describing levels of quality (Andrade, 2000; Arter & Chappuis, 2007; Stiggins, 2001). Criteria are used in determining the level at which student work meets expectations. Markers of quality give students a clear idea about what must be done to demonstrate a certain level of mastery, understanding, or proficiency (e.g., "Exceeds Expectations" does xyz, "Meets Expectations" does only xy or yz, "Developing" does only x or y or z). Rubrics can be used for any assignment in a course, or for any way in which students are asked to demonstrate what they've learned. They can also be used to facilitate self- and peer-reviews of student work.

Rubrics aren't just for summative evaluation. They can be used as a teaching tool as well. When used as part of a formative assessment, they can help students understand both the holistic nature and/or specific analytic components of the learning expected, the level of learning expected, and then make decisions about their current level of learning to inform revision and improvement (Reddy & Andrade, 2010).

Why use rubrics?

Rubrics help instructors:

Provide students with feedback that is clear, directed and focused on ways to improve learning.

Demystify assignment expectations so students can focus on the work instead of guessing "what the instructor wants."

Reduce time spent on grading and develop consistency in how you evaluate student learning across students and throughout a class.

Rubrics help students:

Focus their efforts on completing assignments in line with clearly set expectations.

Self- and peer-reflect on their learning, making informed changes to achieve the desired learning level.

Developing a Rubric

During the process of developing a rubric, instructors might:

Select an assignment for your course - ideally one you identify as time intensive to grade, or students report as having unclear expectations.

Decide what you want students to demonstrate about their learning through that assignment. These are your criteria.

Identify the markers of quality on which you feel comfortable evaluating students’ level of learning - often along with a numerical scale (e.g., "Accomplished," "Emerging," "Beginning" for a developmental approach).

Give students the rubric ahead of time. Advise them to use it in guiding their completion of the assignment.

It can be overwhelming to create a rubric for every assignment in a class at once, so start by creating one rubric for one assignment. See how it goes and develop more from there! Also, do not reinvent the wheel. Rubric templates and examples exist all over the Internet, or consider asking colleagues if they have developed rubrics for similar assignments. 

Sample Rubrics

Examples of holistic and analytic rubrics: see Tables 2 & 3 in “Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners” (Allen & Tanner, 2006)

Examples across assessment types: see “Creating and Using Rubrics,” Carnegie Mellon Eberly Center for Teaching Excellence & Educational Innovation

“VALUE Rubrics”: see the Association of American Colleges and Universities set of free, downloadable rubrics, with foci including creative thinking, problem solving, and information literacy.

Andrade, H. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-18.

Arter, J., & Chappuis, J. (2007). Creating and recognizing quality rubrics. Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.

Stiggins, R. J. (2001). Student-involved classroom assessment (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Reddy, Y., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448.

Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e., what learning objectives does it measure)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks as important as the main assignment?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric includes all the criteria (such as clarity, organization, mechanics, etc.) to be considered together and included in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric . An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove a focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback
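As a concrete, entirely illustrative contrast between the rubric types above, the sketch below represents an analytic rubric as a small data structure that returns one score and one descriptor per criterion, whereas a holistic rubric would record a single overall judgment. The criteria, levels, and descriptors are assumptions, not taken from any particular rubric.

```python
# A minimal sketch (assumed structure and values, not tied to any tool) of how
# an analytic rubric scores each criterion separately, while a holistic rubric
# records a single overall judgment.

analytic_rubric = {
    "Thesis":       {4: "Clear, original, arguable", 2: "Present but vague", 0: "Missing"},
    "Evidence":     {4: "Varied and well integrated", 2: "Limited or loosely tied in", 0: "Absent"},
    "Organization": {4: "Logical throughout", 2: "Uneven in places", 0: "Hard to follow"},
}

def analytic_feedback(scores: dict[str, int]) -> tuple[int, list[str]]:
    """Return the summed score plus the descriptor matched for each criterion."""
    comments = [f"{c}: {analytic_rubric[c][level]}" for c, level in scores.items()]
    return sum(scores.values()), comments

total, comments = analytic_feedback({"Thesis": 4, "Evidence": 2, "Organization": 4})
print(total)        # -> 10
print(comments[1])  # -> "Evidence: Limited or loosely tied in"

holistic_score = 3  # a holistic rubric would instead record one overall judgment
```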

Step 3 (Optional): Look for templates and examples.

You might Google “Rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but consider steps 4, 5, and 6 below to ensure that the rubric matches your assignment description, learning objectives and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc. for help.

Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Check each draft criterion: Can it be observed and measured? Is it important and essential? Is it distinct from the other criteria? Is it phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some are more important than others, and how you will weight them

Step 5: Design the rating scale

Most ratings scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (For example, 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT can be useful for creating a rubric. You will want to engineer the prompt you provide the AI assistant to ensure you get what you want. For example, you might include the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient” (i.e., B-level work). You might also include suggestions for students outside of the actual rubric about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs absence, complete vs incomplete, many vs none, major vs minor, consistent vs inconsistent, always vs never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric , do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric

Step 8: Pilot-test your rubric

Prior to implementing your rubric on a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language . Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language . Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students . Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” you might describe what makes a resource excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Example of a holistic rubric for a final paper

Single-point rubric

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics .
  • Gonzalez, J. (2014). Know your terms: Holistic, Analytic, and Single-Point Rubrics . Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics . Teaching for Authentic Student Performance, 54 (4), 14-17. Retrieved from   
  • Miller, A. (2012). Tame the beast: tips for designing and using rubrics.
  • Ragupathi, K., Lee, A. (2020). Beyond Fairness and Consistency in Grading: The Role of Rubrics in Higher Education. In: Sanger, C., Gleason, N. (eds) Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.


Rubrics are a set of criteria to evaluate performance on an assignment or assessment. Rubrics can communicate expectations regarding the quality of work to students and provide a standardized framework for instructors to assess work. Rubrics can be used for both formative and summative assessment. They are also crucial in encouraging self-assessment of work and structuring peer-assessments. 

Why use rubrics?

Rubrics are an important tool to assess learning in an equitable and just manner. This is because they enable:

  • A common set of standards and criteria to be uniformly applied, which can mitigate bias
  • Transparency regarding the standards and criteria on which students are evaluated
  • Efficient grading with timely and actionable feedback 
  • Identifying areas in which students need additional support and guidance 
  • The use of objective, criterion-referenced metrics for evaluation 

Some instructors may be reluctant to provide a rubric to grade assessments under the perception that it stifles student creativity (Haugnes & Russell, 2018). However, sharing the purpose of an assessment and criteria for success in the form of a rubric, along with relevant examples, has been shown to particularly improve the success of BIPOC, multiracial, and first-generation students (Jonsson, 2014; Winkelmes, 2016). Improved success in assessments is generally associated with an increased sense of belonging which, in turn, leads to higher student retention and more equitable outcomes in the classroom (Calkins & Winkelmes, 2018; Weisz et al., 2023). By not providing a rubric, faculty risk having students guess the criteria on which they will be evaluated. When students have to guess what the expectations are, it may unfairly disadvantage those who are first-generation, BIPOC, international, or otherwise have not been exposed to the cultural norms that have dominated higher-ed institutions in the U.S. (Shapiro et al., 2023). Moreover, in such cases, criteria may be applied inconsistently across students, leading to biases in the grades awarded.

Steps for Creating a Rubric

Clearly state the purpose of the assessment, which topic(s) learners are being tested on, the type of assessment (e.g., a presentation, essay, group project), the skills they are being tested on (e.g., writing, comprehension, presentation, collaboration), and the goal of the assessment for instructors (e.g., gauging formative or summative understanding of the topic). 

Determine the specific criteria or dimensions to assess in the assessment. These criteria should align with the learning objectives or outcomes to be evaluated. These criteria typically form the rows in a rubric grid and describe the skills, knowledge, or behavior to be demonstrated. The set of criteria may include, for example, the idea/content, quality of arguments, organization, grammar, citations and/or creativity in writing. These criteria may form separate rows or be compiled in a single row depending on the type of rubric.

(See row headers of Figure 1.)

Create a scale of performance levels that describe the degree of proficiency attained for each criterion. The scale typically has 4 to 5 levels (although there may be fewer levels depending on the type of rubric used). The rubric should also have meaningful labels (e.g., not meeting expectations, approaching expectations, meeting expectations, exceeding expectations). When assigning levels of performance, use inclusive language that can inculcate a growth mindset among students, especially when work may be otherwise deemed to not meet the mark. Some examples include “Does not yet meet expectations,” “Considerable room for improvement,” “Progressing,” “Approaching,” “Emerging,” “Needs more work,” instead of using terms like “Unacceptable,” “Fails,” “Poor,” or “Below Average.”

(See column headers of Figure 1.)

Develop a clear and concise descriptor for each combination of criterion and performance level. These descriptors should provide examples or explanations of what constitutes each level of performance for each criterion. Typically, instructors should start by describing the highest and lowest level of performance for that criterion and then describing intermediate performance for that criterion. It is important to keep the language uniform across all columns, e.g., use syntax and words that are aligned in each column for a given criterion.

(See cells of Figure 1.)

It is important to consider how each criterion is weighted and for each criterion to reflect the importance of the learning objectives being tested. For example, if the primary goal of a research proposal is to test mastery of content and application of knowledge, these criteria should be weighted more heavily compared to other criteria (e.g., grammar, style of presentation). This can be done by associating a different scoring system with each criterion (e.g., following a scale of 8-6-4-2 points for each level of performance in higher-weight criteria and 4-3-2-1 points for each level of performance in lower-weight criteria). Further, the number of points awarded across levels of performance should be evenly spaced (e.g., 10-8-6-4 instead of 10-6-3-1). Finally, if there is a letter grade associated with a particular assessment, consider how it relates to scores. For example, instead of having students receive an A only if they received the highest level of performance on each criterion, consider assigning an A grade to a range of scores (28-30 total points) or a combination of levels of performance (e.g., exceeds expectations on higher-weight criteria and meets expectations on other criteria).

(See the numerical values in the column headers of Figure 1.)
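The weighting and cut-off arithmetic described above can be checked with a quick calculation. The sketch below uses the 8-6-4-2 / 4-3-2-1 point pattern mentioned in the text; the specific criteria and the letter-grade ranges are assumptions for illustration only.

```python
# A minimal sketch of the weighting scheme described above (illustrative values
# only): higher-weight criteria use an 8-6-4-2 point scale, lower-weight
# criteria use 4-3-2-1, and the total is mapped to an assumed letter-grade range.

POINTS = {
    # criterion: points at (exceeds, meets, approaching, not yet) levels
    "Content mastery":          (8, 6, 4, 2),   # higher-weight criterion
    "Application of knowledge": (8, 6, 4, 2),   # higher-weight criterion
    "Grammar and style":        (4, 3, 2, 1),   # lower-weight criterion
}

LEVELS = {"exceeds": 0, "meets": 1, "approaching": 2, "not yet": 3}

def total_points(ratings: dict[str, str]) -> int:
    """Sum weighted points given a performance level for each criterion."""
    return sum(POINTS[c][LEVELS[level]] for c, level in ratings.items())

score = total_points({
    "Content mastery": "exceeds",
    "Application of knowledge": "meets",
    "Grammar and style": "meets",
})
letter = "A" if score >= 17 else "B" if score >= 13 else "C"  # assumed ranges
print(score, letter)  # -> 17 A
```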


Figure 1:  Graphic describing the five basic elements of a rubric

Note: Consider using a template rubric that can be used to evaluate similar activities in the classroom to avoid the fatigue of developing multiple rubrics. Some tools include Rubistar or iRubric, which provide suggested words for each criterion depending on the type of assessment. Additionally, the above format can be incorporated in rubrics that can be directly added in Canvas or in the grid view of rubrics in Gradescope, which are common grading tools. Alternatively, tables within a word processor or spreadsheet may also be used to build a rubric. You may also adapt the example rubrics provided below to the specific learning goals for the assessment using the blank template rubrics we have provided against each type of rubric. Watch the linked video for a quick introduction to designing a rubric. Word document (docx) files linked below will automatically download to your device, whereas pdf files will open in a new tab.

Types of Rubrics

Analytic Rubrics

In these rubrics, one specifies at least two criteria and provides a separate score for each criterion. The steps outlined above for creating a rubric are typical for an analytic-style rubric. Analytic rubrics are used to provide detailed feedback to students and help identify strengths as well as particular areas in need of improvement. These can be particularly useful when providing formative feedback to students, for student peer assessments and self-assessments, or for project-based summative assessments that evaluate student learning across multiple criteria. You may use a blank analytic rubric template (docx) or adapt an existing sample of an analytic rubric (pdf).


Fig 2: Graphic describing a sample analytic rubric (adopted from George Mason University, 2013)

Developmental Rubrics

These are a subset of analytic rubrics that are typically used to assess student performance and engagement during a learning period, but not the end product. Such rubrics are typically used to assess soft skills and behaviors that are less tangible (e.g., intercultural maturity, empathy, collaboration skills). These rubrics are useful in assessing the extent to which students develop a particular skill, ability, or value in experiential learning-based programs. They are grounded in the theory of development (King, 2005). Examples include an intercultural knowledge and competence rubric (docx) and a global learning rubric (docx).

Holistic Rubrics

These rubrics consider all criteria evaluated on one scale, providing a single score that gives an overall impression of a student’s performance on an assessment. These rubrics also emphasize the overall quality of a student’s work, rather than delineating shortfalls of their work. However, a limitation of holistic rubrics is that they are not useful for providing specific, nuanced feedback or for identifying areas of improvement. Thus, they might be useful when grading summative assessments in which students have previously received detailed feedback using analytic or single-point rubrics. They may also be used to provide quick formative feedback for smaller assignments where no more than 2-3 criteria are being tested at once. Try using our blank holistic rubric template (docx) or adapt an existing sample of a holistic rubric (pdf).


Fig 3: Graphic describing a sample holistic rubric (adopted from Teaching Commons, DePaul University)

Checklist Rubrics

These rubrics contain only two levels of performance (e.g., yes/no, present/absent) across a longer list of criteria (often more than five). Checklist rubrics have the advantage of providing a quick assessment of criteria, given that each criterion is either met or not met. Consequently, they are preferable when initiating self- or peer-assessments of learning, since they simplify evaluations so that each criterion can elicit only one of two responses, allowing uniform and quick grading. For similar reasons, such rubrics are useful for faculty in providing quick formative feedback, since they immediately highlight the specific criteria to improve on. Such rubrics are also used in grading summative assessments in courses utilizing alternative grading systems such as specifications grading, contract grading or a credit/no credit grading system, wherein a minimum threshold of performance has to be met for the assessment. Having said that, developing checklist rubrics from existing analytic rubrics may require considerable investment upfront, given that criteria have to be phrased in a way that can elicit only binary responses. Here is a link to the checklist rubric template (docx).


Fig. 4: Graphic describing a sample checklist rubric
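Because every criterion in a checklist rubric is a yes/no judgment, scoring reduces to counting the criteria met and, in specifications- or credit/no-credit-style grading, comparing that count to a threshold. The sketch below illustrates this; the criteria and the threshold are invented examples.

```python
# A minimal sketch (assumed criteria) of a checklist rubric: every criterion is
# judged met / not met, and the assessment earns credit only if a minimum
# number of criteria are met, as in specifications or credit/no-credit grading.

checklist = {
    "States a research question":              True,
    "Cites at least one peer-reviewed source": True,
    "Includes a labelled figure":              False,
    "Stays within the word limit":             True,
}

REQUIRED = 3  # assumed threshold for credit

met = sum(checklist.values())
print(f"{met}/{len(checklist)} criteria met ->",
      "credit" if met >= REQUIRED else "no credit")  # -> 3/4 criteria met -> credit
```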

Single-Point Rubrics

A single-point rubric is a modified version of a checklist-style rubric, in that it specifies a single column of criteria. However, rather than only indicating whether expectations are met or not, as happens in a checklist rubric, a single-point rubric allows instructors to specify ways in which criteria exceed or do not meet expectations. Here the criteria to be tested are laid out in a central column describing the average expectation for the assignment. Instructors indicate areas of improvement on the left side of the criteria, whereas areas of strength in student performance are indicated on the right side. These types of rubrics provide flexibility in scoring, and are typically used in courses with alternative grading systems such as ungrading or contract grading. However, they do require the instructor to provide detailed feedback for each student, which can be unfeasible for assessments in large classes. Here is a link to the single point rubric template (docx).


Fig. 5 Graphic describing a single point rubric (adopted from Teaching Commons, DePaul University)

Best Practices for Designing and Implementing Rubrics

When designing the rubric format, descriptors and criteria should be presented in a way that is compatible with screen readers and reading assistive technology. For example, avoid using only color, jargon, or complex terminology to convey information. In case you do use color, pictures or graphics, try providing alternative formats for rubrics, such as plain text documents. Explore resources from the CU Digital Accessibility Office to learn more.

Co-creating rubrics can help students to engage in higher-order thinking skills such as analysis and evaluation. Further, it allows students to take ownership of their own learning by determining the criteria of the work they aspire towards. For graduate classes or upper-level students, one way of doing this may be to provide the learning outcomes of the project and let students develop the rubric on their own. However, students in introductory classes may need more scaffolding, such as providing them a draft and leaving room for modification (Stevens & Levi, 2013). Watch the linked video for tips on co-creating rubrics with students. Further, involving teaching assistants in designing a rubric can help in getting feedback on expectations for an assessment prior to implementing and norming a rubric.

When first designing a rubric, it is important to compare grades awarded for the same assessment by multiple graders to make sure the criteria are applied uniformly and reliably for the same level of performance. Further, ensure that the levels of performance in student work can be adequately distinguished using a rubric. Such a norming protocol is particularly important to also do at the start of any course in which multiple graders use the same rubric to grade an assessment (e.g., recitation sections, lab sections, teaching team). Here, instructors may select a subset of assignments that all graders evaluate using the same rubric, followed by a discussion to identify any discrepancies in criteria applied and ways to address them. Such strategies can make the rubrics more reliable, effective, and clear.
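A lightweight way to run the norming check described above is to compare the scores two graders give the same sample submission, criterion by criterion, and flag large gaps for discussion. The sketch below is one assumed way to do that; the criteria, scores, and one-level tolerance are illustrative.

```python
# A minimal sketch of a norming check (illustrative data): two graders score the
# same sample submission with the same rubric, and criteria where their scores
# diverge by more than one level are flagged for discussion by the teaching team.

grader_a = {"Thesis": 4, "Evidence": 3, "Organization": 2}
grader_b = {"Thesis": 4, "Evidence": 1, "Organization": 2}

for criterion in grader_a:
    gap = abs(grader_a[criterion] - grader_b[criterion])
    if gap > 1:
        print(f"Discuss '{criterion}': scores differ by {gap} levels")
# -> Discuss 'Evidence': scores differ by 2 levels
```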

Sharing the rubric with students prior to an assessment can help familiarize students with an instructor’s expectations. This can help students master their learning outcomes by guiding their work in the appropriate direction and increase student motivation. Further, providing the rubric to students can help encourage metacognition and ability to self-assess learning.

Sample Rubrics

Below are links to rubric templates designed by a team of experts assembled by the Association of American Colleges and Universities (AAC&U) to assess 16 major learning goals. These goals are a part of the Valid Assessment of Learning in Undergraduate Education (VALUE) program. All of these examples are analytic rubrics and have detailed criteria to test specific skills. However, since any given assessment typically tests multiple skills, instructors are encouraged to develop their own rubric by utilizing criteria picked from a combination of the rubrics linked below.

  • Civic knowledge and engagement-local and global
  • Creative thinking
  • Critical thinking
  • Ethical reasoning
  • Foundations and skills for lifelong learning
  • Information literacy
  • Integrative and applied learning
  • Intercultural knowledge and competence
  • Inquiry and analysis
  • Oral communication
  • Problem solving
  • Quantitative literacy
  • Written Communication

Note: Clicking on the above links will automatically download them to your device in Microsoft Word format. These links have been created and are hosted by Kansas State University. Additional information regarding the VALUE Rubrics may be found on the AAC&U homepage.

Below are links to sample rubrics that have been developed for different types of assessments. These rubrics follow the analytical rubric template, unless mentioned otherwise. However, these rubrics can be modified into other types of rubrics (e.g., checklist, holistic or single point rubrics) based on the grading system and goal of assessment (e.g., formative or summative). As mentioned previously, these rubrics can be modified using the blank template provided.

  • Oral presentations  
  • Painting Portfolio (single-point rubric)
  • Research Paper
  • Video Storyboard

Additional information:

Office of Assessment and Curriculum Support. (n.d.). Creating and using rubrics . University of Hawai’i, Mānoa

Calkins, C., & Winkelmes, M. A. (2018). A teaching method that boosts UNLV student retention . UNLV Best Teaching Practices Expo , 3.

Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students. Studies In Educational Evaluation , 53, 69-76

Haugnes, N., & Russell, J. L. (2016). Don’t box me in: Rubrics for Artists and Designers. To Improve the Academy, 35(2), 249-283.

Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment , Assessment & Evaluation in Higher Education , 39(7), 840-852 

McCartin, L. (2022, February 1). Rubrics! an equity-minded practice . University of Northern Colorado

Shapiro, S., Farrelly, R., & Tomaš, Z. (2023). Chapter 4: Effective and Equitable Assignments and Assessments. Fostering International Student Success in higher education (pp, 61-87, second edition). TESOL Press.

Stevens, D. D., & Levi, A. J. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (second edition). Sterling, VA: Stylus.

Teaching Commons (n.d.). Types of Rubrics . DePaul University

Teaching Resources (n.d.). Rubric best practices, examples, and templates . NC State University 

Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K.H. (2016). A teaching intervention that increases underserved college students’ success . Peer Review , 8(1/2), 31-36.

Weisz, C., Richard, D., Oleson, K., Winkelmes, M.A., Powley, C., Sadik, A., & Stone, B. (in progress, 2023). Transparency, confidence, belonging and skill development among 400 community college students in the state of Washington . 

Association of American Colleges and Universities. (2009). Valid Assessment of Learning in Undergraduate Education (VALUE) . 

Canvas Community. (2021, August 24). How do I add a rubric in a course? Canvas LMS Community.

 Center for Teaching & Learning. (2021, March 03). Overview of Rubrics . University of Colorado, Boulder

 Center for Teaching & Learning. (2021, March 18). Best practices to co-create rubrics with students . University of Colorado, Boulder.

Chase, D., Ferguson, J. L., & Hoey, J. J. (2014). Assessment in creative disciplines: Quantifying and qualifying the aesthetic . Common Ground Publishing.

Feldman, J. (2018). Grading for equity: What it is, why it matters, and how it can transform schools and classrooms . Corwin Press, CA.

Gradescope (n.d.). Instructor: Assignment - Grade Submissions . Gradescope Help Center. 

Henning, G., Baker, G., Jankowski, N., Lundquist, A., & Montenegro, E. (Eds.). (2022). Reframing assessment to center equity . Stylus Publishing. 

 King, P. M. & Baxter Magolda, M. B. (2005). A developmental model of intercultural maturity . Journal of College Student Development . 46(2), 571-592.

Selke, M. J. G. (2013). Rubric assessment goes to college: Objective, comprehensive evaluation of student work. Lanham, MD: Rowman & Littlefield.

The Institute for Habits of Mind. (2023, January 9). Creativity Rubrics - The Institute for Habits of Mind . 


Eberly Center for Teaching Excellence & Educational Innovation

Creating and Using Rubrics

A rubric is a scoring tool that explicitly describes the instructor’s performance expectations for an assignment or piece of work. A rubric identifies:

  • criteria: the aspects of performance (e.g., argument, evidence, clarity) that will be assessed
  • descriptors: the characteristics associated with each dimension (e.g., argument is demonstrable and original, evidence is diverse and compelling)
  • performance levels: a rating scale that identifies students’ level of mastery within each criterion  

Rubrics can be used to provide feedback to students on diverse types of assignments, from papers, projects, and oral presentations to artistic performances and group projects.

Benefiting from Rubrics

For instructors, rubrics can:

  • reduce the time spent grading by allowing instructors to refer to a substantive description without writing long comments
  • help instructors more clearly identify strengths and weaknesses across an entire class and adjust their instruction appropriately
  • help to ensure consistency across time and across graders
  • reduce the uncertainty which can accompany grading
  • discourage complaints about grades

For students, rubrics can help them:

  • understand instructors’ expectations and standards
  • use instructor feedback to improve their performance
  • monitor and assess their progress as they work towards clearly indicated goals
  • recognize their strengths and weaknesses and direct their efforts accordingly

Examples of Rubrics

Here we provide a sample set of rubrics designed by faculty at Carnegie Mellon and other institutions. Although your particular field of study or type of assessment may not be represented, viewing a rubric designed for a similar assessment may give you ideas for the kinds of criteria, descriptions, and performance levels to use on your own rubric.

Paper Assignments

  • Example 1: Philosophy Paper. This rubric was designed for student papers in a range of courses in philosophy (Carnegie Mellon).
  • Example 2: Psychology Assignment. A short, concept-application homework assignment in cognitive psychology (Carnegie Mellon).
  • Example 3: Anthropology Writing Assignments. This rubric was designed for a series of short writing assignments in anthropology (Carnegie Mellon).
  • Example 4: History Research Paper. This rubric was designed for essays and research papers in history (Carnegie Mellon).

Projects

  • Example 1: Capstone Project in Design. This rubric describes the components and standards of performance from the research phase to the final presentation for a senior capstone project in design (Carnegie Mellon).
  • Example 2: Engineering Design Project. This rubric describes performance standards for three aspects of a team project: research and design, communication, and teamwork.

Oral Presentations

  • Example 1: Oral Exam This rubric describes a set of components and standards for assessing performance on an oral exam in an upper-division course in history (Carnegie Mellon).
  • Example 2: Oral Communication This rubric is adapted from Huba and Freed, 2000.
  • Example 3: Group Presentations This rubric describes a set of components and standards for assessing group presentations in history (Carnegie Mellon).

Class Participation/Contributions

  • Example 1: Discussion Class This rubric assesses the quality of student contributions to class discussions. This is appropriate for an undergraduate-level course (Carnegie Mellon).
  • Example 2: Advanced Seminar This rubric is designed for assessing discussion performance in an advanced undergraduate or graduate seminar.

See also " Examples and Tools " section of this site for more rubrics.


Southwestern University

Designing Rubrics

Developing Assessment Criteria

After you have decided which type of rubric you’ll use for your class, the next step is to decide which skills or knowledge you expect students to display in their papers.  Here are a few questions that may help develop criteria to assess student writing:

What is the purpose of the assignment? 

What skills or knowledge do you want students to display?  Returning to the “purpose” section of your prompt  may be helpful here. The Center for Teaching at Vanderbilt University suggests considering whether your assignment is designed for students to demonstrate the following:

  • Thoroughness
  • Demonstration of Knowledge
  • Critical Inquiry

You might also consider:

  • Demonstration of Research Skills
  • Awareness of Disciplinary Conventions
  • Synthesis of Information

How have you already discussed the assignment with students?

Revisiting the  writing guides  and the “criteria” section of your prompt can provide you with ready-made terminology for evaluation criteria. You might also check our list of precise language for describing writing tasks. 

Can you adapt an existing rubric? 

Our list of further resources and model rubrics  includes links to several rubrics that you may want to adapt for your assignment.

What, if any, are the criteria you will repeat on each rubric? 

You may choose to have a section of your rubric that you use for each assignment for the semester, so that students know that they will always be expected to, for example, use disciplinary conventions in their writing, or include a counterargument.  The capstone rubrics designed by the departments can be particularly helpful in choosing reusable criteria. 

How many criteria do you need? 

Most rubrics tend to identify somewhere between three and eight criteria for evaluation.

What’s the worst-case scenario? 

If you’re assigning value, it’s useful to think in terms of minimum requirements.  How important is each section of your rubric?  If you’re comfortable with a student making an A on a paper that includes multiple grammatical mistakes (and there may be assignments where this is the case) then five points out of a hundred might work fine.  If not, then maybe that section should be worth ten or fifteen points.

Do you want to involve your class? 

The authors of “On the ‘Uses’ of Rubrics” suggest including a wild-card slot in your rubric or involving your class in the creation of the rubric.  Spending fifteen or twenty minutes as a class discussing and establishing the important criteria for grading can certainly build student involvement in an assignment and help build a sense of fairness in evaluation.

“Creating and Using Rubrics.”    The Assessment Office.  The University of Hawaii at Mānoa .  18 December 2013.  Web. 1 June 2014.

“Creating Grading Criteria. ”   The Sheridan Center for Teaching and Learning .  Brown University. n.d. Web. 1 June 2014.

“Grading Student Work.”   Center for Teaching.  Vanderbilt University. n.d. Web. 1 June 2014.

Linder, Katherine.  “How to Develop a Rubric.”   Ohio State Writing Across the Curriculum Resources .  Ohio State University. 16 November 2011. Web. 1 June 2014.

“Matching Learning Goals to Assignment Types.”   Teaching Commons .  DePaul University. n.d. Web. 1 June 2014.

“Rubric Development.”    Center for University Teaching, Learning, and Assessment .  University of West Florida.  24 April 2014.  Web. 1 June 2014.

Tierney, Robin and Marielle Simon. “What’s Still Wrong with Rubrics: Focusing on the consistency of performance criteria across scale levels.” Practical Assessment, Research & Evaluation, 9(2). Web. 1 June 2014.

Turley, Eric and Chris W. Gallagher.  “On the ‘Uses’ of Rubrics: Reframing the Great Rubric Debate.”   The English Journal  97.4 (2008): 87-92.

Newcastle University

Assessment Criteria and Assessment Rubrics

Assessment rubrics and assessment criteria are both tools used for establishing a clear understanding between staff and students about what is expected from assessed work, and how student knowledge is evaluated. Assessment rubrics and assessment criteria are often referred to interchangeably but it's important to understand they have slightly different purposes.

Assessment criteria are specific standards or guidelines that outline what is expected of a student in a particular assessment task. They are often used to set clear guidelines for what constitutes success in each assessment task but are generally not broken down into discrete levels of performance. They can, however, provide the foundations for creating assessment rubrics. 

An assessment rubric is a more detailed and structured tool that breaks down the assessment criteria into different levels or categories of performance, typically ranging from poor to excellent on a percentage scale. Rubrics can make a marker's life easier by ensuring consistency and objectivity in grading, providing a clear framework for assessing performance, and reducing time spent on grading.

Additionally, assessment rubrics can help students to understand the aims and requirements of an assessment task and can support them in understanding the outcome of their assessed work, i.e., how the markers reached their decision, the positive areas of feedback, and the feedforward comments that allow students to recognise their future development needs.

Top Tips for Writing Assessment Rubrics

  • Start by clarifying the purpose of the assessment and what specific skills or learning outcomes you want to evaluate. This will guide the development of your rubric.
  • Break down the assessment into its essential components or criteria . These should reflect the specific skills or knowledge you want to assess. Be clear and specific about what you're looking for.
  • Use straightforward and unambiguous language in your rubric. Avoid jargon or complex terminology that may confuse students or other assessors.
  • Create a clear and logical progression of performance levels, each criterion should provide detailed descriptions of what constitutes achievement at that level (check if your school has example assessment rubrics/criteria as a starting point).
  • Consider assigning different weightings to each criterion. Some rubric criteria will be more significant than others, e.g., the weightings for theoretical application, criticality, and structure will differ (guided by the knowledge outcomes); a worked sketch follows this list.
  • Check for parity across grade boundaries. Consider what a student would need to do to obtain each classification, and be clear about the components that make up good performance, e.g., clear structure, argument, engagement with critical material. It is best to start at a pass and work towards a first class. Criteria should concentrate on what the student has done, rather than what they have not done.
  • Leave space on the rubric for assessors to provide feedback to students . This can be particularly helpful for offering constructive feedback on areas of improvement.
  • Best practice is to ask colleagues  to read your criteria and check them for clarity before using them.
  • Criteria may be revised from year to year in response to feedback and your own experience of using them. Rubrics can and should evolve to become more effective over time.
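
To make the weighting tip concrete, here is a minimal sketch (in Python) of how weighted criterion marks could be combined into an overall percentage. The criteria, weightings, and marks are illustrative assumptions, not values from any particular rubric.

```python
# Illustrative only: hypothetical criteria, weightings, and marks (percentages).
criteria = {
    "Theoretical application": {"weight": 0.40, "mark": 68},
    "Criticality":             {"weight": 0.35, "mark": 72},
    "Structure and clarity":   {"weight": 0.25, "mark": 60},
}

# Weightings should sum to 1.0 so the overall mark stays on the same scale.
assert abs(sum(c["weight"] for c in criteria.values()) - 1.0) < 1e-9

# The overall mark is the weighted average of the criterion marks.
overall = sum(c["weight"] * c["mark"] for c in criteria.values())
print(f"Overall mark: {overall:.1f}%")  # prints: Overall mark: 67.4%
```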

Example: UG Stage 1 Rubric

Example: PGT Rubric

Case Studies Database

There are lots of examples of how formative assessment is used here at Newcastle on our Case Studies Database.


Northern Illinois University, Center for Innovative Teaching and Learning

Rubrics for Assessment

A rubric is an explicit set of criteria used for assessing a particular type of work or performance (TLT Group, n.d.) and provides more details than a single grade or mark. Rubrics, therefore, will help you grade more objectively.

Have your students ever asked, “Why did you grade me that way?” or stated, “You never told us that we would be graded on grammar!” As a grading tool, rubrics can address these and other issues related to assessment: they reduce grading time; they increase objectivity and reduce subjectivity; they convey timely feedback to students; and they improve students’ ability to include required elements of an assignment (Stevens & Levi, 2005). Grading rubrics can be used to assess a range of activities in any subject area.

Elements of a Rubric

Typically designed as a grid-type structure, a grading rubric includes criteria, levels of performance, scores, and descriptors, which together become a unique assessment tool for any given assignment. A simple grading rubric for a history research paper, for example, combines all four of these elements.

Criteria identify the trait, feature or dimension which is to be measured and include a definition and example to clarify the meaning of each trait being assessed. Each assignment or performance will determine the number of criteria to be scored. Criteria are derived from assignments, checklists, grading sheets or colleagues.

Examples of Criteria for a term paper rubric

  • Introduction
  • Arguments/analysis
  • Grammar and punctuation
  • Internal citations

Levels of performance

Levels of performance are often labeled as adjectives which describe the performance levels. Levels of performance determine the degree of performance which has been met and will provide for consistent and objective assessment and better feedback to students. These levels tell students what they are expected to do. Levels of performance can be used without descriptors but descriptors help in achieving objectivity. Words used for levels of performance could influence a student’s interpretation of performance level (such as superior, moderate, poor or above or below average).

Examples to describe levels of performance

  • Excellent, Good, Fair, Poor
  • Master, Apprentice, Beginner
  • Exemplary, Accomplished, Developing, Beginning, Undeveloped
  • Complete, Incomplete

Scores make up the system of numbers or values used to rate each criterion and often are combined with levels of performance. Begin by asking how many points are needed to adequately describe the range of performance you expect to see in students’ work. Consider the range of possible performance levels.

Example of scores for a rubric

1, 2, 3, 4, 5 or 2, 4, 6, 8

Descriptors

Descriptors are explicit descriptions of the performance; they show how the score is derived and what is expected of the students. Descriptors spell out each level (gradation) of performance for each criterion and describe what performance at a particular level looks like. They help you distinguish one student’s work from another’s and should be detailed enough to differentiate between the levels and increase the objectivity of the rater.

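
As a rough illustration of how these four elements fit together, the sketch below (Python, with hypothetical criteria, point values, and abbreviated descriptors) models a small analytic rubric as a plain data structure; it is not a template from this guide.

```python
# Hypothetical rubric for a short history research paper (illustrative only).
# Each criterion maps levels of performance to a score and a descriptor.
rubric = {
    "Arguments/analysis": {
        "Excellent": (4, "Thesis is clearly stated and supported by well-chosen evidence."),
        "Good":      (3, "Thesis is stated; most claims are supported by evidence."),
        "Fair":      (2, "Thesis is vague or only partially supported."),
        "Poor":      (1, "No identifiable thesis; claims are unsupported."),
    },
    "Internal citations": {
        "Excellent": (4, "All sources are cited accurately in the required style."),
        "Good":      (3, "Minor citation errors that do not obscure the sources."),
        "Fair":      (2, "Frequent citation errors or missing citations."),
        "Poor":      (1, "Sources are not cited."),
    },
}

# Look up the score and descriptor a marker would record for one criterion.
score, descriptor = rubric["Arguments/analysis"]["Good"]
print(score, "-", descriptor)
```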

Developing a Grading Rubric

First, consider using any of a number of existing rubrics available online. Many rubrics can be used “as is.” Or, you could modify a rubric by adding or deleting elements or combining others for one that will suit your needs. Finally, you could create a completely customized rubric using specifically designed rubric software or just by creating a table with the rubric elements. The following steps will help you develop a rubric no matter which option you choose.

  • Select a performance/assignment to be assessed. Begin with a performance or assignment which may be difficult to grade and where you want to reduce subjectivity. Is the performance/assignment an authentic task related to learning goals and/or objectives? Are students replicating meaningful tasks found in the real world? Are you encouraging students to problem solve and apply knowledge? Answer these questions as you begin to develop the criteria for your rubric.
  • List criteria. Begin by brainstorming a list of all criteria, traits, or dimensions associated with the task. Reduce the list by chunking similar criteria and eliminating others until you produce a range of appropriate criteria. A rubric designed for formative and diagnostic assessments might have more criteria than those rubrics rating summative performances (Dodge, 2001). Keep the list of criteria manageable and reasonable.
  • Write criteria descriptions. Keep criteria descriptions brief, understandable, and in a logical order for students to follow as they work on the task.
  • Determine level of performance adjectives.  Select words or phrases that will explain what performance looks like at each level, making sure they are discrete enough to show real differences. Levels of performance should match the related criterion.
  • Develop scores. The scores will determine the ranges of performance in numerical value. Make sure the values make sense in terms of the total points possible: What is the difference between getting 10 points versus 100 points versus 1,000 points? The best and worst performance scores are placed at the ends of the continuum and the other scores are placed appropriately in between. It is suggested to start with fewer levels and to distinguish between work that does not meet the criteria. Also, it is difficult to make fine distinctions using qualitative levels such as never, sometimes, usually; limited acceptance, proficient; or NA, poor, fair, good, very good, excellent. How will you make the distinctions?
  • Write the descriptors. As a student is judged to move up the performance continuum, previous level descriptions are considered achieved in subsequent description levels. Therefore, it is not necessary to include “beginning level” descriptors in the same box where new skills are introduced.
  • Evaluate the rubric. As with any instructional tool, evaluate the rubric each time it is used to ensure it matches instructional goals and objectives. Be sure students understand each criterion and how they can use the rubric to their advantage. Consider providing more details about each of the rubric’s areas to further clarify these sections to students. Pilot test new rubrics if possible, review the rubric with a colleague, and solicit students’ feedback for further refinements.

Types of Rubrics

Determining which type of rubric to use depends on what and how you plan to evaluate. There are several types of rubrics including holistic, analytical, general, and task-specific. Each of these will be described below.

Holistic

All criteria are assessed as a single score. Holistic rubrics are good for evaluating overall performance on a task. Because only one score is given, holistic rubrics tend to be easier to score. However, holistic rubrics do not provide detailed information on student performance for each criterion; the levels of performance are treated as a whole.

  • “Use for simple tasks and performances such as reading fluency or response to an essay question . . .
  • Getting a quick snapshot of overall quality or achievement
  • Judging the impact of a product or performance” (Arter & McTighe, 2001, p 21)

Analytical

Each criterion is assessed separately, using different descriptive ratings. Each criterion receives a separate score. Analytical rubrics take more time to score but provide more detailed feedback.

  • “Judging complex performances . . . involving several significant [criteria] . . .
  • Providing more specific information or feedback to students . . .” (Arter & McTighe, 2001, p 22)

General

A generic rubric contains criteria that are general across tasks and can be used for similar tasks or performances. Criteria are assessed separately, as in an analytical rubric.

  • “[Use] when students will not all be doing exactly the same task; when students have a choice as to what evidence will be chosen to show competence on a particular skill or product.
  • [Use] when instructors are trying to judge consistently in different course sections” (Arter & McTighe, 2001, p 30)

Task-specific

Assesses a specific task. Unique criteria are assessed separately. However, it may not be possible to account for each and every criterion involved in a particular task which could overlook a student’s unique solution (Arter & McTighe, 2001).

  • “It’s easier and faster to get consistent scoring
  • [Use] in large-scale and “high-stakes” contexts, such as state-level accountability assessments
  • [Use when] you want to know whether students know particular facts, equations, methods, or procedures” (Arter & McTighe, 2001, p 28) 

Grading rubrics are effective and efficient tools which allow for objective and consistent assessment of a range of performances, assignments, and activities. Rubrics can help clarify your expectations and will show students how to meet them, making students accountable for their performance in an easy-to-follow format. The feedback that students receive through a grading rubric can help them improve their performance on revised or subsequent work. Rubrics can help to rationalize grades when students ask about your method of assessment. Rubrics also allow for consistency in grading for those who team teach the same course, for TAs assigned to the task of grading, and serve as good documentation for accreditation purposes. Several online sources exist which can be used in the creation of customized grading rubrics; a few of these are listed below.

Arter, J., & McTighe, J. (2001). Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance. Thousand Oaks, CA: Corwin Press, Inc.

Stevens, D. D., & Levi, A. J. (2005). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Sterling, VA: Stylus.

The Teaching, Learning, and Technology Group (n.d.). Rubrics: Definition, tools, examples, references. http://www.tltgroup.org/resources/flashlight/rubrics.htm

Selected Resources

Dodge, B. (2001). Creating a rubric on a given task. http://webquest.sdsu.edu/rubrics/rubrics.html

Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.

Rubric Builders and Generators

eMints.org (2011). Rubric/scoring guide. http://www.emints.org/webquest/rubric.shtml

General Rubric Generator. http://www.teach-nology.com/web_tools/rubrics/general/

RubiStar (2008). Create rubrics for your project-based learning activities. http://rubistar.4teachers.org/index.php


Suggested citation

Northern Illinois University Center for Innovative Teaching and Learning. (2012). Rubrics for assessment. In Instructional guide for university faculty and teaching assistants. Retrieved from https://www.niu.edu/citl/resources/guides/instructional-guide


The Ohio State University

Designing Assessments of Student Learning

Hollie Nyseth Brehm, Associate Professor, Department of Sociology. Professor Hollie Nyseth Brehm was a graduate student the first time she taught a class: “I didn’t have any training on how to teach, so I assigned a final paper and gave them instructions: ‘Turn it in at the end of course.’ That was sort of it.” Brehm didn’t have a rubric or a process to check in with students along the way. Needless to say, the assignment didn’t lead to any major breakthroughs for her students. But it was a learning experience for Brehm. As she grew her teaching skills, she began to carefully craft assignments to align to course goals, make tasks realistic and meaningful, and break down large assignments into manageable steps. “Now I always have rubrics. … I always scaffold the assignment such that they’ll start by giving me their paper topic and a couple of sources and then turn in a smaller portion of it, and we write it in pieces. And that leads to a much better learning experience for them—and also for me, frankly, when I turn to grade it.”

Reflect  

Have you ever planned a big assignment that didn’t turn out as you’d hoped? What did you learn, and how would you design that assignment differently now? 

What are students learning in your class? Are they meeting your learning outcomes? You simply cannot answer these questions without assessment of some kind.

As educators, we measure student learning through many means, including assignments, quizzes, and tests. These assessments can be formal or informal, graded or ungraded. But assessment is not simply about awarding points and assigning grades. Learning is a process, not a product, and that process takes place during activities such as recall and practice. Assessing skills in varied ways helps you adjust your teaching throughout your course to support student learning.


Research tells us that our methods of assessment don’t only measure how much students have learned. They also play an important role in the learning process. A phenomenon known as the “testing effect” suggests students learn more from repeated testing than from repeated exposure to the material they are trying to learn (Karpicke & Roediger, 2008). While exposure to material, such as during lecture or study, helps students store new information, it’s crucial that students actively practice retrieving that information and putting it to use. Frequent assessment throughout a course provides students with the practice opportunities that are essential to learning.

In addition, we can’t assume students can transfer what they have practiced in one context to a different context. Successful transfer of learning requires understanding of deep, structural features and patterns that novices to a subject are still developing (Barnett & Ceci, 2002; Bransford & Schwartz, 1999). If we want students to be able to apply their learning in a wide variety of contexts, they must practice what they’re learning in a wide variety of contexts.

Providing a variety of assessment types gives students multiple opportunities to practice and demonstrate learning. One way to categorize the range of assessment options is as formative or summative.

Formative and Summative Assessment

Opportunities not simply to practice, but to receive feedback on that practice, are crucial to learning (Ambrose et al., 2010). Formative assessment facilitates student learning by providing frequent low-stakes practice coupled with immediate and focused feedback. Whether graded or ungraded, formative assessment helps you monitor student progress and guide students to understand which outcomes they’ve mastered, which they need to focus on, and what strategies can support their learning. Formative assessment also informs how you modify your teaching to better meet student needs throughout your course.

Technology Tip

Design quizzes in CarmenCanvas to provide immediate and useful feedback to students based on their answers. Learn more about setting up quizzes in Carmen. 

Summative assessment measures student learning by comparing it to a standard. Usually these types of assessments evaluate a range of skills or overall performance at the end of a unit, module, or course. Unlike formative assessment, they tend to focus more on product than process. These high-stakes experiences are typically graded and should be less frequent (Ambrose et al., 2010).

Using Bloom's Taxonomy

[Figure: the Bloom's Taxonomy categories depicted as the layers of a cake. From bottom to top: Remember – recognizing and recalling facts; Understand – understanding what the facts mean; Apply – applying the facts, rules, concepts, and ideas; Analyze – breaking down information into component parts; Evaluate – judging the value of information or ideas; Create – combining parts to make a new whole.]

Bloom’s Taxonomy is a common framework for thinking about how students can demonstrate their learning on assessments, as well as for articulating course and lesson learning outcomes .

Benjamin Bloom (alongside collaborators Max Englehart, Edward Furst, Walter Hill, and David Krathwohl) published Taxonomy of Educational Objectives in 1956.   The taxonomy provided a system for categorizing educational goals with the intent of aiding educators with assessment. Commonly known as Bloom’s Taxonomy, the framework has been widely used to guide and define instruction in both K-12 and university settings. The original taxonomy from 1956 included a cognitive domain made up of six categories: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. The categories after Knowledge were presented as “skills and abilities,” with the understanding that knowledge was the necessary precondition for putting these skills and abilities into practice. 

A revised Bloom's Taxonomy from 2001 updated these six categories to reflect how learners interact with knowledge. In the revised version, students can:  Remember content, Understand ideas, Apply information to new situations, Analyze relationships between ideas, Evaluate information to justify perspectives or decisions, and Create new ideas or original work. In the graphic pictured here, the categories from the revised taxonomy are imagined as the layers of a cake.

Assessing students on a variety of Bloom's categories will give you a better sense of how well they understand your course content. The taxonomy can be a helpful guide to predicting which tasks will be most difficult for students so you can provide extra support where it is needed. It can also be used to craft more transparent assignments and test questions by honing in on the specific skills you want to assess and finding the right language to communicate exactly what you want students to do.  See the Sample Bloom's Verbs in the Examples section below.
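
As a rough sketch of what such a verb bank can look like, the snippet below (Python) lists commonly cited example verbs for each category of the revised taxonomy; exact lists vary by source.

```python
# Commonly cited action verbs for each category of the revised Bloom's Taxonomy.
# Useful when wording learning outcomes, assignment prompts, and test questions.
blooms_verbs = {
    "Remember":   ["define", "list", "recall", "identify"],
    "Understand": ["explain", "summarize", "classify", "paraphrase"],
    "Apply":      ["use", "demonstrate", "solve", "implement"],
    "Analyze":    ["compare", "differentiate", "organize", "examine"],
    "Evaluate":   ["justify", "critique", "judge", "defend"],
    "Create":     ["design", "compose", "construct", "formulate"],
}

for level, verbs in blooms_verbs.items():
    print(f"{level}: {', '.join(verbs)}")
```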

Diving deeper into Bloom's Taxonomy

Like most aspects of our lives, activities and assessments in today’s classroom are inextricably linked with technology. In 2008, Andrew Churches extended Bloom’s Taxonomy to address the emerging changes in learning behaviors and opportunities as “technology advances and becomes more ubiquitous.” Consult Bloom’s Digital Taxonomy for ideas on using digital tools to facilitate and assess learning across the six categories of learning.

Did you know that the cognitive domain (commonly referred to simply as Bloom's Taxonomy) was only one of three domains in the original Bloom's Taxonomy (1956)? While it is certainly the most well-known and widely used, the other two domains— psychomotor and affective —may be of interest to some educators. The psychomotor domain relates to physical movement, coordination, and motor skills—it might apply to the performing arts or other courses that involve movement, manipulation of objects, and non-discursive communication like body language. The affective domain pertains to feelings, values, motivations, and attitudes and is used more often in disciplines like medicine, social work, and education, where emotions and values are integral aspects of learning. Explore the full taxonomy in  Three Domains of Learning: Cognitive, Affective, and Psychomotor (Hoque, 2017).

In Practice

Consider the following to make your assessments of student learning effective and meaningful.

Align assignments, quizzes, and tests closely to learning outcomes.

It goes without saying that you want students to achieve the learning outcomes for your course. The testing effect implies, then, that your assessments must help them retrieve the knowledge and practice the skills that are relevant to those outcomes.

Plan assessments that measure specific outcomes for your course. Instead of choosing quizzes and tests that are easy to grade or assignment types common to your discipline, carefully consider what assessments will best help students practice important skills. When assignments and feedback are aligned to learning outcomes, and you share this alignment with students, they have a greater appreciation for your course and develop more effective strategies for study and practice targeted at achieving those outcomes (Wang, et al., 2013).


Provide authentic learning experiences.

Consider how far removed from “the real world” traditional assessments like academic essays, standard textbook problems, and multiple-choice exams feel to students. In contrast, assignments that are authentic resemble real-world tasks. They feel relevant and purposeful, which can increase student motivation and engagement (Fink, 2013). Authentic assignments also help you assess whether students will be able to transfer what they learn into realistic contexts beyond your course.

Integrate assessment opportunities that prepare students to be effective and successful once they graduate, whether as professionals, as global citizens, or in their personal lives.

To design authentic assignments:

  • Choose real-world content . If you want students to be able to apply disciplinary methods, frameworks, and terminology to solve real-world problems after your course, you must have them engage with real-world examples, procedures, and tools during your course. Include actual case studies, documents, data sets, and problems from your field in your assessments.
  • Target a real-world audience . Ask students to direct their work to a tangible reader, listener or viewer, rather than to you. For example, they could write a blog for their peers or create a presentation for a future employer.
  • Use real-world formats . Have students develop content in formats used in professional or real-life discourse. For example, instead of a conventional paper, students could write an email to a colleague or a letter to a government official, develop a project proposal or product pitch for a community-based company, post a how-to video on YouTube, or create an infographic to share on social media.

Simulations, role plays, case studies, portfolios, project-based learning, and service learning are all great avenues to bring authentic assessment into your course.

Make sure assignments are achievable.

Your students juggle coursework from several classes, so it’s important to be conscious of workload. Assign tasks they can realistically handle at a given point in the term. If it takes you three hours to do something, it will likely take your students six hours or more. Choose assignments that assess multiple learning outcomes from your course to keep your grading manageable and your feedback useful (Rayner et al., 2016).

Scaffold assignments so students can develop knowledge and skills over time.

For large assignments, use scaffolding to integrate multiple opportunities for feedback, reflection, and improvement. Scaffolding means breaking a complex assignment down into component parts or smaller progressive tasks over time. Practicing these smaller tasks individually before attempting to integrate them into a completed assignment supports student learning by reducing the amount of information they need to process at a given time (Salden et al., 2006).

Scaffolding ensures students will start earlier and spend more time on big assignments. And it provides you more opportunities to give feedback and guidance to support their ultimate success. Additionally, scaffolding can draw students’ attention to important steps in a process that are often overlooked, such as planning and revision, leading them to be more independent and thoughtful about future work.

A familiar example of scaffolding is a research paper. You might ask students to submit a topic or thesis in Week 3 of the semester, an annotated bibliography of sources in Week 6, a detailed outline in Week 9, a first draft on which they can get peer feedback in Week 11, and the final draft in the last week of the semester.
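
One way to keep such a scaffold organized is to lay the milestones out as simple data, as in this small sketch (Python). The week numbers mirror the hypothetical example above; adjust them to your own term calendar.

```python
# Hypothetical scaffold for a semester-long research paper (illustrative only).
milestones = [
    (3,  "Topic or thesis statement"),
    (6,  "Annotated bibliography of sources"),
    (9,  "Detailed outline"),
    (11, "First draft for peer feedback"),
    (15, "Final draft"),
]

# Sanity check: each milestone should come later in the term than the previous one.
assert all(a[0] < b[0] for a, b in zip(milestones, milestones[1:]))

for week, task in milestones:
    print(f"Week {week:>2}: {task}")
```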

Your course journey is decided in part by how you sequence assignments. Consider where students are in their learning and place assignments at strategic points throughout the term. Scaffold across the course journey by explaining how each assignment builds upon the learning achieved in previous ones (Walvoord & Anderson, 2010).

Be transparent about assignment instructions and expectations. 

Communicate clearly to students about the purpose of each assignment, the process for completing the task, and the criteria you will use to evaluate it before they begin the work. Studies have shown that transparent assignments support students to meet learning goals and result in especially large increases in success and confidence for underserved students (Winkelmes et al., 2016).

To increase assignment transparency:


  • Explain how the assignment links to one or more course learning outcomes . Understanding why the assignment matters and how it supports their learning can increase student motivation and investment in the work.
  • Outline steps of the task in the assignment prompt . Clear directions help students structure their time and effort. This is also a chance to call out disciplinary standards with which students are not yet familiar or guide them to focus on steps of the process they often neglect, such as initial research.
  • Provide a rubric with straightforward evaluation criteria . Rubrics make transparent which parts of an assignment you care most about. Sharing clear criteria sets students up for success by giving them the tools to self-evaluate and revise their work before submitting it. Be sure to explain your rubric, and particularly to unpack new or vague terms; for example, language like "argue," “close reading,” "list significant findings," and "document" can mean different things in different disciplines. It is helpful to show exemplars and non-exemplars along with your rubric to highlight differences in unacceptable, acceptable, and exceptional work.

Engage students in reflection or discussion to increase assignment transparency. Have them consider how the assessed outcomes connect to their personal lives or future careers. In-class activities that ask them to grade sample assignments and discuss the criteria they used, compare exemplars and non-exemplars, engage in self- or peer-evaluation, or complete steps of the assignment when you are present to give feedback can all support student success.

Technology Tip   

Enter all  assignments and due dates  in your Carmen course to increase transparency. When assignments are entered in Carmen, they also populate to Calendar, Syllabus, and Grades areas so students can easily track their upcoming work. Carmen also allows you to  develop rubrics  for every assignment in your course. 

Examples

  • Sample Bloom’s Verbs
  • Building a Question Bank
  • Using the Transparent Assignment Template
  • Sample Assignment: AI-Generated Lesson Plan

Include frequent low-stakes assignments and assessments throughout your course to provide the opportunities for practice and feedback that are essential to learning. Consider a variety of formative and summative assessment types so students can demonstrate learning in multiple ways. Use Bloom’s Taxonomy to determine—and communicate—the specific skills you want to assess.

Remember that effective assessments of student learning are:

  • Aligned to course learning outcomes
  • Authentic, or resembling real-world tasks
  • Achievable and realistic
  • Scaffolded so students can develop knowledge and skills over time
  • Transparent in purpose, tasks, and criteria for evaluation
Additional Resources

  • Collaborative learning techniques: A handbook for college faculty (book)
  • Cheating Lessons (book)
  • Minds online: Teaching effectively with technology (book)
  • Assessment: The Silent Killer of Learning (video)
  • TILT Higher Ed Examples and Resources (website)
  • Writing to Learn: Critical Thinking Activities for Any Classroom (guide)

Ambrose, S.A., Bridges, M.W., Lovett, M.C., DiPietro, M., & Norman, M.K. (2010).  How learning works: Seven research-based principles for smart teaching . John Wiley & Sons. 

Barnett, S.M., & Ceci, S.J. (2002). When and where do we apply what we learn? A taxonomy for far transfer.  Psychological Bulletin , 128 (4). 612–637.  doi.org/10.1037/0033-2909.128.4.612  

Bransford, J.D, & Schwartz, D.L. (1999). Rethinking transfer: A simple proposal with multiple implications.  Review of Research in Education , 24 . 61–100.  doi.org/10.3102/0091732X024001061  

Fink, L. D. (2013).  Creating significant learning experiences: An integrated approach to designing college courses . John Wiley & Sons. 

Karpicke, J.D., & Roediger, H.L., III. (2008). The critical importance of retrieval for learning.  Science ,  319 . 966–968.  doi.org/10.1126/science.1152408  

Rayner, K., Schotter, E. R., Masson, M. E., Potter, M. C., & Treiman, R. (2016). So much to read, so little time: How do we read, and can speed reading help?.  Psychological Science in the Public Interest ,  17 (1), 4-34.  doi.org/10.1177/1529100615623267     

Salden, R.J.C.M., Paas, F., van Merriënboer, J.J.G. (2006). A comparison of approaches to learning task selection in the training of complex cognitive skills.  Computers in Human Behavior , 22 (3). 321–333.  doi.org/10.1016/j.chb.2004.06.003  

Walvoord, B. E., & Anderson, V. J. (2010).  Effective grading: A tool for learning and assessment in college . John Wiley & Sons. 

Wang, X., Su, Y., Cheung, S., Wong, E., & Kwong, T. (2013). An exploration of Biggs’ constructive alignment in course design and its impact on students’ learning approaches. Assessment & Evaluation in Higher Education, 38(4), 477–491.

Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K.H. (2016). A teaching intervention that increases underserved college students’ success.  Peer Review , 18 (1/2). 31–36. Retrieved from  https://www.aacu.org/peerreview/2016/winter-spring/Winkelmes


University of California, Merced

Aligning Assessment and Learning Activities

From the student's perspective, what we assess signals what we think is important and what we want students to learn; it defines the actual curriculum.  For example, 

If our course assessment plan consists primarily of multiple-choice tests requiring recall of information, this tells students to focus on learning specific information  (i.e., shallow learning).  If our assessment plan requires students to work through cases in which they propose not only solutions but also their rationales, this sends a message that deep learning (i.e., critical thinking) is important (Saroyan & Amundsen, 2004, 97).

How students perform on an exam or final performance is as much a reflection of what they were asked to do during the learning process as it is about the design of the assessment itself. For example, students often take notes during a live or recorded lecture, complete recall-oriented assignments and/or performances, participate in discussions, and complete multiple-choice quizzes or iClicker questions to master the learning outcome and prepare for the upcoming exam.  But unless these multiple-choice items assess conceptual understandings or require the application of new information, student learning will be shallow and focus on memorization for recognition purposes (Ambrose, et al., 2010; Bransford, Brown, & Cocking, 2000).  Too often, the type of questions posed to students on quizzes, polling exercises, or practice assignments does not prepare students for exam questions that require them to solve authentic, real-world problems, evaluate policies, or produce creative works.

For any type of classroom assessment measure to be valid, it must measure learning that is stated in the course learning outcomes (CLOs). And from a design perspective, learning outcomes and assessments drive the selection of planned learning experiences, including content delivery, practice assignments, quizzes, discussions, etc. (Wiggins & McTighe, 2005).

When a course is designed coherently, the planned learning experiences "provide students with sufficient practice and feedback to enable them to accomplish the desired learning and be prepared for evaluation activities" (Saroyan & Amundsen, 2004, 98).


Take an honest look at course alignment. Ask yourself the following:

Are the CLOs clear, observable, and measurable statements of expected knowledge, skills, and dispositions?

Do CLOs reflect core concepts and competencies for the level of this course (i.e., intro., lower or upper-division, graduate-level)? 

Do you have a variety of formal and informal assessments, with more frequent formative assessments to monitor and direct student learning?

Taken together, are the assessments a solid reflection of the CLOs/expectations for students?

Do the planned learning experiences (LEs) for your course (i.e., content delivery, practice exercises, discussions, group work, assignments, etc.) adequately prepare students for the type of summative assessment utilized in each unit of study?

Do your assessments target lower- or higher-order thinking skills?

Are there ways you can improve the alignment of the LEs and assessments?

References:

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: 7 research-based principles for smart teaching. San Francisco, CA: Jossey-Bass.

Bransford, J.D., Brown, A.L., & Cocking, R.R. (2000 Eds.). How people learn: Brain, mind, experience, and school. Washington, DC: National Academies Press; 1st edition.

Saroyan & Amundsen, (2004). Rethinking teaching in higher education: From a course design workshop to a faculty development framework. Sterling, VA: Stylus Pub.

Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd Ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

Student success begins with a well-planned assignment that aligns with at least one of the course outcomes and clearly articulates what you want students to be able to demonstrate that they know (i.e., core concepts and/or competencies). Recent research has shown that providing students with a clear and transparent picture of the assignment (e.g., previous student work samples, a grading rubric) before they start can lead to better performance (Winkelmes et al., 2016).

Consider making task analysis * a graded part of the assignment. Using the following handout, have students articulate in writing

  • the purpose and goal of the assignment,
  • how this task fits with previous course readings, lectures, activities,
  • what steps they need to take to successfully complete the assignment, and
  • exactly how they will accomplish those steps.

Finally, taking a few minutes of class time to remind students what they should be doing for their assignments can pay off down the road (Gooblar, 2019).

* Task analysis is the process of breaking a task or skill down into smaller, more manageable steps or actions.

  • Reread the assignment prompt.
  • What are the goals of this assignment? What will you achieve or understand if you complete the assignment successfully? What will you need to demonstrate to satisfy the assignment requirements?
  • Write down the specific steps you will need to take to successfully complete the assignment.
  • Go through the list of steps and note next to each the time you think it will take to complete the step.
  • Given the assignment due date, make yourself a plan: a schedule for completing each step in the assignment. Try to be realistic, given your other commitments, academic or personal.
  • Share this plan with at least one peer to get feedback (optional).
References:

Gooblar, D. (2019). The missing course: Everything they never taught you about college teaching. Cambridge, MA: Harvard University Press.

Winkelmes, M., et al. (2016). A teaching intervention that increases underserved college students’ success. Peer Review, 18(1/2), 31-36.

Have you ever heard any of the following from your students?

Why did you give me this grade?

You didn't even lecture about that topic!

You never told us that we would be graded on spelling or grammar!

How were we to know you wanted X?

Susie and I studied together, yet you gave her an A and I got a C. Why?

For most faculty, grading (sometimes referred to as marking) is one of the most undesirable aspects of the job. To promote student growth and development, grading requires providing students with frequent and instructive feedback, both of which can be time-consuming and present other challenges.  According to Saroyan & Amundsen (2004), most instructors are concerned with how to

manage assessment and grading in large classes

grade group work and class participation

increase reliability among multiple graders (TAs)

increase objectivity when marking

handle grading disputes (95).

While these are all important issues related to grading, we want to focus your attention on one specific grading technique - the development and use of rubrics, which have the potential to mitigate many of the concerns noted above.  Additionally, grading rubrics can be used to assess a range of activities in any subject area.

Reflect on the following questions and resources:

What is a grading rubric?

​A rubric is an explicit set of criteria used for assessing a particular type of work or performance, and it provides more details than a single grade or mark. Rubrics, therefore, will help you grade more objectively and efficiently (Stevens & Levi, 2005).

What are the benefits of using a grading rubric?

Rubrics are useful for instructors and students alike.

What are the elements of a rubric?

Most commonly, a grading rubric is designed in a matrix or grid-type structure and includes grading criteria, levels of performance, the range of possible scores, and descriptors that serve as unique assessment tools for a specific assignment/assessment.  There are different types of rubrics (e.g., checklists, rating scales, and analytic), each serving a different purpose (see Figure 1).  The most common, however, is the analytic rubric (see Figure 2).

[Figure 1: types of rubrics; Figure 2: example of an analytic rubric]

Designing a Rubric

Criteria identify the specific features or dimensions which will be measured and include a definition and example to clarify the meaning of each feature being assessed. The number of criteria to be scored will vary by assignment or performance. For example, criteria for a term paper rubric may include:

  • Introduction
  • Arguments/analysis
  • Writing conventions: grammar, punctuation, spelling
  • Internal citations

Levels of Performance clarify the degree of performance and are often labeled with adjectives that describe the performance levels. These levels tell students what they are expected to do to achieve a particular score/grade. The adjectives used for levels of performance may influence a student's interpretation of the performance level. Some examples are:

  • Excellent, Good, Fair, Poor
  • Master, Apprentice, Beginner
  • Exemplary, Accomplished, Developing, Beginning, Undeveloped
  • Complete, Incomplete

Scores represent the values used to rate each criterion and are often combined with levels of performance. Begin by asking yourself how many points are needed to adequately describe the range of performance you expect to see in students’ work. Consider the range of possible performance levels (e.g., 1, 2, 3, 4, 5; 2, 4, 6, 8; or score ranges such as 12-15, 9-11, 6-9, etc.).

Descriptors are explicit descriptions of the performance and show what is expected of the students and how scores are derived. Descriptors spell out each level of performance for each criterion and describe what performance at a particular level looks like. Descriptors should be detailed enough to differentiate between the different levels and increase the objectivity of the rater.

What does the final grade in your course represent? In other words, how would you describe the difference between the student who leaves your course with an "A" and the student who leaves the course with a "D?"  How might you help students understand performance differences and set themselves up for success in your course?

Identify at least one assessment in which a grading rubric makes sense. Create a grading rubric for this assignment. (Make sure to give this to the students before they complete the assignment.)

Stevens, D. D., & Levi, A. J. (2005). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Sterling, VA: Stylus.


University of York Library


Academic writing: a practical guide

Assessment & feedback.


What markers are looking for in your work and using feedback to improve your writing.

Assessment criteria

Most written assignments are marked using assessment criteria, which show the areas the work is marked on and what's expected at each grade band. Each department has its own assessment criteria, and sometimes modules have a specific set of criteria too. 

Assessment criteria can be organised as a list of descriptions for each grade band, as a grid with descriptions for each marking area and grade band, or as marking areas rated on a scale.

[Image: basic illustration of different criteria formats]

Marking areas

Criteria have different areas (or categories) that work is marked against. Marking areas can vary across departments or assignments, but there are some key areas that often appear. Here are some ideas of what markers may be looking for:

Content & relevance

This area looks at how far the assignment meets the task requirements and the relevance of the information included. 

Addressing the question/task

Make sure you read the assessment instructions carefully!

  • How far does the work address the question/task?
  • Are all parts of the question/task addressed?
  • Is the work within +/- 10% of the stated word count?

Relevance of content

  • Are points and sources relevant to the question/task?
  • Are the relevant ideas/principles from the module included?
  • For higher scores, does the work include relevant ideas from reading beyond the module content?
  • Are sources used relevant, up to date and reliable?


Argument & analysis

This category focuses on the quality of your argument and critical analysis relating to the topic. It's also sometimes called 'originality of thought' or 'engagement with ideas'.

An important aspect of academic writing is incorporating relevant previous research and thinking as well as your own findings. However, it's not enough just to describe or summarise this information - to get higher marks, you also need to critically analyse it. What does it mean in terms of your overall argument?  If the content is the 'what', the analysis and argument are more like 'so what?'.

In terms of Bloom's Taxonomy, this requires the more complex processes: analysing, evaluating and creating.

Questions markers may ask:

  • Is the argument logical and well developed, with enough relevant supporting evidence?
  • Is the work analytical, rather than just descriptive?
  • Have connections or comparisons been made across sources?
  • Are source information and/or research findings evaluated in terms of the argument?

Critical analysis is spread throughout assignments in small critical comments and is also usually the major focus of the discussion or conclusion. To help you critically analyse source information or your own findings, ask questions like why?, how? and so what?

  • Why did paper A find different results to papers B and C?
  • How might this information influence policy? 
  • Plant A grew faster than Plant B - so what?

Structure & organisation

This area focuses on the structure and organisation of the assignment, which is important to help the reader follow the argument easily.

  • Is the overall structure and argument clear?
  • Are points logically ordered to create the argument?
  • Are paragraphs structured clearly, with one central idea?
  • Are ideas linked smoothly within and between paragraphs?
  • Does the work meet structural requirements for the type of writing (eg, does a report include section headings)?

Style & presentation

This area is about the surface aspects of your work - how does it look and feel?

Referencing style

  • Are citations and references formatted correctly in the required referencing style?
  • Is all source information acknowledged appropriately?

You can find examples of correctly formatted citations and references in our practical guide to referencing styles.

Presentation

  • Are the format and presentation appropriate to the task?
  • Have formatting guidelines or requirements been followed?
  • Is formatting consistent?

Use of language

  • Is an academic writing style used that is appropriate to the assignment?
  • Is spelling, grammar and punctuation correct?
  • Is the writing clear and concise?

What to do with assessment criteria?


Find your assessment criteria

Your module learning outcomes and assessment criteria can be found in your programme handbook, an induction or module VLE site, or the module catalogue.


Read & understand the criteria

You need to know what markers' expectations are, so read the criteria carefully. Consider: 

  • Which key areas is your work marked on?
  • What are the expectations at different grade bands?

Apply the criteria

You can use the criteria in a few ways to help improve your assignments:

  • Evaluate example assignments using the criteria - why were they awarded that grade?
  • Use with feedback on previous assignments to help identify strengths and areas to improve (also see the Feedback box).
  • Use them while writing assignments to remind you what markers are looking for.

If you have questions about the marking criteria, ask your module tutor or academic supervisor, or book a Writing Centre appointment.

View this information in a new window: Assessment criteria [Google Doc]

Using feedback to improve writing

Feedback comes in many forms:

  • written comments about your work
  • highlighting sections of the assessment criteria
  • verbally in an audio or video clip, or in a conversation
  • model answers to compare your work to
  • advice on common issues (especially for formative work)


Feedback is an opportunity to develop and improve. It's not always nice to read, but don't ignore it!

Feedback can...

...show what you've done well.

The essay is structured well and your argument is easy to follow.

Your literature review comprises highly relevant studies, which you link back to in your discussion of the results.

Overall, you demonstrate a good understanding of the quantitative statistical analyses you have conducted.

These comments help you understand the strengths of your writing. For example, if you know that your structure was clear, you can use the same approach in your next assignments.

Also, they make you feel good - give yourself a pat on the back!

...identify areas for improvement

You raise some (potentially) interesting points in your essay, but these are very often not supported with a reference to relevant evidence.

The third paragraph is quite off-topic, and doesn't help you answer the essay question.

Your arguments are not clear in several places due to errors in grammar and wording.

It might not be very nice to hear about aspects of your work that are not so good, but these comments are the most useful to help you improve!

This type of comment often includes phrases like:

  • [something positive], but/however [something to improve]
  • make sure you...
  • pay attention to...
  • you could have...
  • it would have helped to...
  • [some part of the assignment] could be more...

See the sections below for tips on dealing with some common problems identified in feedback.

...help extend your understanding

What might be the implications of this policy for schools?

How far could your findings be generalised to other working environments?

You mention some theories of aggression, but don't go into detail - why are these particular theories relevant?

Comments like this can give you ideas to consider or further reading to help deepen your knowledge on a topic. This is especially useful if you want to explore this topic further in later modules or your dissertation. They could also highlight gaps in your understanding. In this case, it might be worthwhile to go back and revise the module content to help you in your later modules.

What to do with feedback?

1. read, 2. reflect, 3. do something!

Here's a handy three-step process to use your feedback to write better assignments. 

Read

  • Read the feedback carefully. What are you doing well? What do you need to improve?
  • Look back at your assignment - can you find the good things and issues mentioned?
  • If the feedback isn't clear, ask your tutor to explain or book a Writing Centre appointment to discuss it.

Reflect

  • Look for patterns in feedback across assignments to identify what to focus on.
  • For areas to improve on, make a plan to address them. For example, if the structure needs to be clearer, you could spend more time planning next time.
  • For things you're doing well, how can you apply this to future assignments?

Do something!

  • Apply your plans from the Reflect stage when you write your next assignments.
  • When you get the feedback for these assignments, think about what you did differently - have you addressed the issues?

And then the cycle begins again!

You can use this template [Google Doc] to review your feedback, identify areas for improvement and work out what you need to focus on.

Common feedback issues & how to avoid them

The issues below are often identified in feedback. Open each for advice on how to address them in your next assignments. Some are easier to deal with than others!

Proofreading, spelling and grammar errors

Work on improving the clarity of your writing, including grammar and choice of words.

There are numerous errors in spelling and grammar, eg 'the survey contain 10 questions'.

Make sure to proofread your work carefully, there are lots of typos.

Feedback like this is very common, as the mistakes are easy for markers to spot.

It shows you need to check your work carefully for small mistakes or typos in spelling, grammar, punctuation or word choice before you submit. This checking is often called 'proofreading'.

More detail & advice: [Google Doc]

Referencing style errors

The format of your references in the text does not always follow the APA format. 

There are some instances of citing and referencing format being incorrect.

Pay attention to following Harvard style correctly.

Each referencing style has specific formatting requirements for in-text citations, footnotes and the reference list/bibliography (as used in that style). For example, in APA style (Tanaka & Smith, 2007) is correctly formatted, but (Tanaka and Smith 2007) is not. These can seem like small details, but they're very important to get right! 

To avoid making referencing style errors, check all of your citations and references carefully before you submit. Common errors include:

  • incorrect author names or missing out authors
  • missing out some information needed in the reference
  • not using the correct punctuation and text formatting, especially full stops, commas, ampersand (&) and italics
  • putting an in-text citation outside the sentence instead of before the full stop
  • citing a source in the text, but not including it in the reference list (or vice versa)

You can find examples of correctly formatted citations and references for each style in our practical guide to referencing styles. You can also use reference management software to generate citations and references from your sources - this can save a lot of time! They're not always 100% correct though, so you'll still need to check them.


More critical analysis/stronger argument needed

The style is often somewhat descriptive, and you could have added more critical discussion of the findings.

Relevant texts are reviewed, but there is only limited criticality towards their content.

The argument lacks development, with minimal indication of original thought.

Comments like this suggest that your writing focuses on the more descriptive processes - remembering, understanding and applying. To access higher marks, you also need to demonstrate the more critical skills - analysis, evaluation and creation. This means considering what the information means in terms of your argument.

More information on what criticality is and how to add it to your work is available elsewhere in this guide.

Structure isn't clear

The argument wasn't always clear, so a more logical structure to the assignment would have helped.

Your results section could be more organised - it seems like you just report everything you found.

Some paragraphs contain unrelated information, which is a bit distracting.

Assignments need a clear overall structure. This usually means a linear structure, where paragraphs have one central idea and build on each other towards the final argument. You can think of this as a flight of stairs going from your title at the bottom to your conclusion at the top. A key way to improve your structure is to plan out your points and arrange them before you start writing.

Cohesion is also an important part of structure. There are lots of words and phrases that you can use to link your ideas more clearly. These phrases don't really add any information, but they make the links clear to the reader so it's easier for them to follow your argument.

For more in-depth information, see our advice on structuring academic writing.

Writing style isn't academic enough

Very long sentences sometimes make the essay difficult to follow.

Personal remarks are inadequate for developing an academic argument.

There are numerous colloquialisms (‘the next thing to do’, ‘pour their knowledge into our heads’) in this work.

Academic writing uses a very different style to other types of writing. Some of the key features are:

  • being clear and concise
  • using neutral words and avoiding informal, conversational or colloquial language
  • avoiding personal language

For more in-depth information, see our advice on academic writing style.

Assignment doesn't meet task requirements

Your literature review is quite good, but the remainder of the assignment is not in the style of a research report.

The first part of the essay is relevant to the question, but the second half is largely off-topic.

You haven’t done any analysis, which was a central part of this assessment.

To get a good mark, you have to complete the assignment that was set! This means answering all parts of the task, staying relevant and using an appropriate structure and style. Make sure to read the assessment brief carefully to find out what you need to do.

For more information, see our advice on understanding task requirements.

Use of source information / course content

There is heavy reliance on just a few references, which all come from the module reading list.

There are some up to date and relevant sources, but you could have used more recent references.

Your sources all support your argument. What about research with different findings?

In most assignments, you need to refer to information in sources to support your argument - without evidence from sources, your points are just your opinion, and it's very difficult to show any critical analysis.

Make sure you:

  • use sources that are credible, up-to-date and relevant to the task.
  • don't rely on just a couple of sources, as this will limit your argument.
  • extend module content and find some sources yourself.
  • don't ignore sources with different findings/points - be complete in your argument.

You can find advice on choosing suitable sources, and on how to find them, elsewhere in this guide.


View this information in a new window: Using feedback to improve writing [Google Doc]


College of Business

Creating and using rubrics for assignments

What is a rubric?

A rubric is a guide that articulates the expectations for an assignment and communicates the level of quality of performance or learning. As an assessment tool, a rubric sets the criteria for evaluating performance or work completed in a course or program. A rubric can communicate the expectations for learning and provide a framework for instructors to make decisions about instruction.

Rubrics are used for both formative assessment (in-process feedback to be used for improvement) and summative assessment (evaluation of student learning at the conclusion of an assignment or project). Essentially, a rubric is a tool for communication between instructor and student. Rubrics promote good practice in:

  • Communication: A rubric creates a common framework and clear expectations
  • Consistency and Fairness: The same criteria and standards apply across students and reviewers/graders
  • Transparency: Progress and grades are clear, which reduces mystery
  • Faster Assessment: Assessment and evaluation can be done more efficiently
  • Identifying Strengths and Weaknesses: Shows where students are doing well and where they need more support (Is it a ‘B’ paper all the way through?)
  • Objective Criteria: Rubrics are criterion-referenced, rather than norm-referenced. Raters ask, “Did the student meet the criteria for level 5 of the rubric?” rather than “How well did this student do compared to other students?”

Two Main Types of Rubrics Used in Course Assessment

The two main types of rubrics used in course assessment are the analytic rubric and the holistic rubric.

Good Practices for Creating Rubrics

In its simplest form, a rubric includes these things:

  • A task description: the activity, assignment, performance, or presentation being assessed.
  • The outcomes or dimensions to be rated (rows): the skills, knowledge, and/or behavior to be demonstrated. Specify the skills, knowledge, and/or behaviors that you will be looking for that are most important to the assignment.
  • A rating scale (columns): labels for the levels of performance, for example:
      • Not meeting, approaching, meeting, exceeding
      • Exemplary, proficient, marginal, unacceptable
      • Advanced, intermediate high, intermediate, novice
      • Complete, partial, minimal, none
      • Letter grades (A, B, C, D, F)
  • Descriptions of each level of performance for each dimension:
      • Top category: the best work you expect using these characteristics
      • Lowest category: an unacceptable product
      • Descriptions of intermediate-level products

Additional tips:

  • Make sure that the language from column to column is similar, and that syntax and wording are aligned
  • Use specific descriptions, avoiding words like “good” and “excellent”
  • Start your list of outcomes with the content, ideas, and arguments, then organization, grammar, and citation (if being evaluated)

[Figure: example of an analytic rubric used for scoring an essay]

Assigning Points or Grades to the Performance Levels

  • Use the assignment instructions as a guide
  • Determine the total possible points for the assignment 
  • Decide on the distribution/percentage of points across the criteria and divide the points accordingly (a small sketch of this arithmetic follows this list). It’s helpful to put the points/percentages directly in the box with the performance descriptions.
  • Incorporate some flexibility by using a range of points for each performance level
  • You can choose to break down the points OR
  • Assign an overall letter grade and use the rubric to identify strengths and points for improvement
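As a purely illustrative sketch of that arithmetic, the snippet below (Python) distributes a hypothetical 100-point assignment across rubric criteria using chosen percentages and then splits each criterion's points across four performance levels. The criterion names echo the ordering suggested above (content and ideas first, then organization, grammar, and citation), but the weights, level labels, and fractions are assumptions made up for the example, not a prescribed scheme.

```python
# Illustrative only: distribute a hypothetical 100-point assignment across
# rubric criteria by percentage, then split each criterion's points across
# four performance levels. All names, weights, and fractions are assumptions.

TOTAL_POINTS = 100

# Criterion -> share of the total grade (shares must sum to 1.0).
criteria_weights = {
    "Content, ideas, and arguments": 0.40,
    "Organization": 0.30,
    "Grammar and mechanics": 0.15,
    "Citation": 0.15,
}

# Performance levels, highest first, with the fraction of a criterion's
# points awarded at each level (an even split, purely for illustration).
levels = ["Exemplary", "Accomplished", "Developing", "Beginning"]
level_fractions = [1.00, 0.75, 0.50, 0.25]

def build_point_table(total, weights):
    """Return {criterion: {level: points}} for the rubric grid."""
    table = {}
    for criterion, weight in weights.items():
        criterion_points = total * weight
        table[criterion] = {
            level: round(criterion_points * fraction, 1)
            for level, fraction in zip(levels, level_fractions)
        }
    return table

if __name__ == "__main__":
    for criterion, row in build_point_table(TOTAL_POINTS, criteria_weights).items():
        cells = ", ".join(f"{level}: {points}" for level, points in row.items())
        print(f"{criterion}: {cells}")
```

Printing the table gives one row per criterion, which could then be transferred into the rubric grid alongside the performance descriptions.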

Approaching Student Work

  • Do a quick read of a few papers to get a sense of the range
  • When grading a paper, start by comparing the work to the highest-level performance description. If the work meets that description, assign the work to that level. If not, move on to the next-highest performance description and so on.
  • Mark the performance level for each criterion. Circle, check, or note specific concepts in the rubric (where technology allows)
  • If you are not sure which of two performance levels to choose, look at the consistency of overall performance across the sample and rate accordingly
  • Add the scores for an overall grade and determine how points convert to grades (see the sketch after this list)
  • Provide brief notes to the student on specific areas of accomplishment or need for improvement, using the rubric to illustrate your points
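The "add the scores and determine how points convert to grades" step is simple arithmetic; as one hypothetical illustration, the sketch below (Python) sums the points awarded per criterion and maps the total onto letter grades. The per-criterion scores and the grade boundaries are invented for the example and would need to match your own rubric and grading scheme.

```python
# Illustrative only: sum the points awarded per rubric criterion and map the
# total onto letter grades. Scores and boundaries are assumptions, not policy.

# Points awarded to one (hypothetical) student, one entry per criterion.
awarded_points = {
    "Content, ideas, and arguments": 32.0,
    "Organization": 22.5,
    "Grammar and mechanics": 12.0,
    "Citation": 15.0,
}

# Minimum total required for each letter grade, checked from highest to lowest.
grade_boundaries = [
    (90, "A"),
    (80, "B"),
    (70, "C"),
    (60, "D"),
    (0, "F"),
]

def overall_grade(scores, boundaries):
    """Sum the criterion scores and return (total, letter grade)."""
    total = sum(scores.values())
    for minimum, letter in boundaries:
        if total >= minimum:
            return total, letter
    return total, boundaries[-1][1]  # fall back to the lowest grade

if __name__ == "__main__":
    total, letter = overall_grade(awarded_points, grade_boundaries)
    print(f"Total: {total} points -> Grade: {letter}")
```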

Checking the Rubric

  • After you have graded a few pieces of work, review the overall grades. Look at the high, middle, and low grades. Does it seem like the overall grades are working out appropriately? 
  • If there is a mismatch, make adjustments to the rubric and re-grade the first few pieces of work as necessary.

Good Practices for Using Rubrics*

Use Student-Friendly Language

Use language that is appropriate to the level of the course and your students. If you are using academic or disciplinary language, make sure you spend time teaching and practicing the concepts.

Share the Rubric with Students

Share the rubric with the assignment prompt so that students are familiar with your expectations. This should help students master your learning outcomes by guiding their work in appropriate directions.

Develop the Rubric with Students

Students can monitor themselves and their peers using agreed-upon criteria that they help develop. Have students apply your rubric to sample products before they create their own. The ability to evaluate, edit, and improve draft documents is an important skill.

Use the Rubric to Grade Student Work

Use the rubric to grade student work and return the rubric with the grading on it. Faculty save time writing extensive comments by marking relevant segments of the rubric. Some instructors include space for additional comments on the rubric, either within each section or at the end.

Use the Rubric for Peer Review

Have students exchange paper drafts and give peer feedback using the rubric. Then, give students time to revise before submitting the final draft to you. You might also require that they turn in the draft and peer-scored rubric with their final paper.

Use the Rubric for Student Self-Assessment

Students assess their own work using the rubric and submit the rubric with their assignment. This is a great basis for deep discussion about which aspects they can improve.

*This content was adapted with gratitude from work done by the University of Hawai’i at Mānoa Office of Assessment, 2018

Selected Resources

Selke, M. J. G. (2013). Rubric assessment goes to college: Objective, comprehensive evaluation of student work. Lanham, MD: Rowman & Littlefield.

Stevens, D. D., & Levi, A. J. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (second edition). Sterling, VA: Stylus.

University of Wisconsin–Madison Examples & Resources

Know Your Terms: Holistic, Analytic, and Single-Point Rubrics (Cult of Pedagogy post)

Columbia University in the City of New York
Office of Teaching, Learning, and Innovation

Designing Assignments for Learning

The rapid shift to remote teaching and learning meant that many instructors reimagined their assessment practices. Whether adapting existing assignments or creatively designing new opportunities for their students to learn, instructors focused on helping students make meaning and demonstrate their learning outside of the traditional, face-to-face classroom setting. This resource distills the elements of assignment design that are important to carry forward as we continue to seek better ways of assessing learning and build on our innovative assignment designs.

On this page:

  • Rethinking Traditional Tests, Quizzes, and Exams
  • Examples from the Columbia University Classroom
  • Tips for Designing Assignments for Learning
  • Reflect on Your Assignment Design
  • Connect with the CTL
  • Resources and References


Cite this resource: Columbia Center for Teaching and Learning (2021). Designing Assignments for Learning. Columbia University. Retrieved [today’s date] from https://ctl.columbia.edu/resources-and-technology/teaching-with-technology/teaching-online/designing-assignments/

Rethinking Traditional Tests, Quizzes, and Exams

Traditional assessments tend to reveal whether students can recognize, recall, or replicate what was learned out of context, and tend to focus on students providing correct responses (Wiggins, 1990). In contrast, authentic assignments engage students in higher-order thinking as they grapple with real or simulated challenges that help them prepare for their professional lives, drawing on the course knowledge learned and the skills acquired to create justifiable answers, performances, or products (Wiggins, 1990). An authentic assessment provides opportunities for students to practice, consult resources, learn from feedback, and refine their performances and products accordingly (Wiggins 1990, 1998, 2014).

Authentic assignments ask students to “do” the subject with an audience in mind and apply their learning in a new situation. Examples of authentic assignments include asking students to: 

  • Write for a real audience (e.g., a memo, a policy brief, letter to the editor, a grant proposal, reports, building a website) and/or publication;
  • Solve problem sets that have real world application; 
  • Design projects that address a real world problem; 
  • Engage in a community-partnered research project;
  • Create an exhibit, performance, or conference presentation;
  • Compile and reflect on their work through a portfolio/e-portfolio.

Noteworthy elements of authentic designs are that instructors scaffold the assignment, and play an active role in preparing students for the tasks assigned, while students are intentionally asked to reflect on the process and product of their work thus building their metacognitive skills (Herrington and Oliver, 2000; Ashford-Rowe, Herrington and Brown, 2013; Frey, Schmitt, and Allen, 2012). 

It’s worth noting here that authentic assessments can initially be time consuming to design, implement, and grade. They are critiqued for being challenging to use across course contexts and for grading reliability issues (Maclellan, 2004). Despite these challenges, authentic assessments are recognized as beneficial to student learning (Svinicki, 2004) as they are learner-centered (Weimer, 2013), promote academic integrity (McLaughlin, L. and Ricevuto, 2021; Sotiriadou et al., 2019; Schroeder, 2021) and motivate students to learn (Ambrose et al., 2010). The Columbia Center for Teaching and Learning is always available to consult with faculty who are considering authentic assessment designs and to discuss challenges and affordances.   

Examples from the Columbia University Classroom 

Columbia instructors have experimented with alternative ways of assessing student learning from oral exams to technology-enhanced assignments. Below are a few examples of authentic assignments in various teaching contexts across Columbia University. 

  • E-portfolios: Statia Cook shares her experiences with an ePortfolio assignment in her co-taught Frontiers of Science course (a submission to the Voices of Hybrid and Online Teaching and Learning initiative); CUIMC use of ePortfolios;
  • Case studies: Columbia instructors have engaged their students in authentic ways through case studies drawing on the Case Consortium at Columbia University. Read and watch a faculty spotlight to learn how Professor Mary Ann Price uses the case method to place pre-med students in real-life scenarios;
  • Simulations: students at CUIMC engage in simulations to develop their professional skills in The Mary & Michael Jaharis Simulation Center in the Vagelos College of Physicians and Surgeons and the Helene Fuld Health Trust Simulation Center in the Columbia School of Nursing; 
  • Experiential learning: instructors have drawn on New York City as a learning laboratory such as Barnard’s NYC as Lab webpage which highlights courses that engage students in NYC;
  • Design projects that address real world problems: Yevgeniy Yesilevskiy on the Engineering design projects completed using lab kits during remote learning. Watch Dr. Yesilevskiy talk about his teaching and read the Columbia News article . 
  • Writing assignments: Lia Marshall and her teaching associate Aparna Balasundaram reflect on their “non-disposable or renewable assignments” to prepare social work students for their professional lives as they write for a real audience; and Hannah Weaver spoke about a sandbox assignment used in her Core Literature Humanities course at the 2021 Celebration of Teaching and Learning Symposium . Watch Dr. Weaver share her experiences.  

Tips for Designing Assignments for Learning

While designing an effective authentic assignment may seem like a daunting task, the following tips can be used as a starting point. See the Resources section for frameworks and tools that may be useful in this effort.  

Align the assignment with your course learning objectives 

Identify the kind of thinking that is important in your course, the knowledge students will apply, and the skills they will practice using through the assignment. What kind of thinking will students be asked to do for the assignment? What will students learn by completing this assignment? How will the assignment help students achieve the desired course learning outcomes? For more information on course learning objectives, see the CTL’s Course Design Essentials self-paced course and watch the video on Articulating Learning Objectives .  

Identify an authentic meaning-making task

For meaning-making to occur, students need to understand the relevance of the assignment to the course and beyond (Ambrose et al., 2010). To Bean (2011) a “meaning-making” or “meaning-constructing” task has two dimensions: 1) it presents students with an authentic disciplinary problem or asks students to formulate their own problems, both of which engage them in active critical thinking, and 2) the problem is placed in “a context that gives students a role or purpose, a targeted audience, and a genre.” (Bean, 2011: 97-98). 

An authentic task gives students a realistic challenge to grapple with, a role to take on that allows them to “rehearse for the complex ambiguities” of life, provides resources and supports to draw on, and requires students to justify their work and the process they used to inform their solution (Wiggins, 1990). Note that if students find an assignment interesting or relevant, they will see value in completing it. 

Consider the kind of activities in the real world that use the knowledge and skills that are the focus of your course. How are this knowledge and these skills applied to answer real-world questions and solve real-world problems? (Herrington et al., 2010: 22). What do professionals or academics in your discipline do on a regular basis? What does it mean to think like a biologist, statistician, historian, social scientist? How might your assignment ask students to draw on current events, issues, or problems that relate to the course and are of interest to them? How might your assignment tap into student motivation and engage them in the kinds of thinking they can apply to better understand the world around them? (Ambrose et al., 2010).

Determine the evaluation criteria and create a rubric

To ensure equitable and consistent grading of assignments across students, make transparent the criteria you will use to evaluate student work. The criteria should focus on the knowledge and skills that are central to the assignment. Building on the criteria identified, create a rubric that makes explicit the expectations for deliverables, and share this rubric with your students so they can use it as they work on the assignment. For more information on rubrics, see the CTL’s resource Incorporating Rubrics into Your Grading and Feedback Practices, and explore the Association of American Colleges & Universities VALUE Rubrics (Valid Assessment of Learning in Undergraduate Education).

Build in metacognition

Ask students to reflect on what and how they learned from the assignment. Help students uncover personal relevance of the assignment, find intrinsic value in their work, and deepen their motivation by asking them to reflect on their process and their assignment deliverable. Sample prompts might include: What did you learn from this assignment? How might you draw on the knowledge and skills you used on this assignment in the future? See Ambrose et al. (2010) for more strategies that support motivation, and the CTL’s resource on Metacognition.

Provide students with opportunities to practice

Design your assignment to be a learning experience and prepare students for success on the assignment. If students can reasonably expect to be successful on an assignment when they put in the required effort, with the support and guidance of the instructor, they are more likely to engage in the behaviors necessary for learning (Ambrose et al., 2010). Ensure student success by actively teaching the knowledge and skills of the course (e.g., how to problem solve, how to write for a particular audience), modeling the desired thinking, and creating learning activities that build up to a graded assignment. Provide opportunities for students to practice using the knowledge and skills they will need for the assignment, whether through low-stakes in-class activities or homework activities that include opportunities to receive and incorporate formative feedback. For more information on providing feedback, see the CTL resource Feedback for Learning.

Communicate about the assignment 

Share the purpose, task, audience, expectations, and criteria for the assignment. Students may have expectations about assessments and how they will be graded that are informed by their prior experiences completing high-stakes assessments, so be transparent. Tell your students why you are asking them to do this assignment, what skills they will be using, how it aligns with the course learning outcomes, and why it is relevant to their learning and their professional lives (i.e., how practitioners/professionals use the knowledge and skills in your course in real-world contexts and for what purposes). Finally, verify that students understand what they need to do to complete the assignment. This can be done by asking students to respond to poll questions about different parts of the assignment, a “scavenger hunt” of the assignment instructions–giving students questions to answer about the assignment and having them work in small groups to answer the questions, or by having students share back what they think is expected of them.

Plan to iterate and to keep the focus on learning 

Draw on multiple sources of data to help make decisions about what changes are needed to the assignment, the assignment instructions, and/or rubric to ensure that it contributes to student learning. Explore assignment performance data. As Deandra Little reminds us: “a really good assignment, which is a really good assessment, also teaches you something or tells the instructor something. As much as it tells you what students are learning, it’s also telling you what they aren’t learning.” ( Teaching in Higher Ed podcast episode 337 ). Assignment bottlenecks–where students get stuck or struggle–can be good indicators that students need further support or opportunities to practice prior to completing an assignment. This awareness can inform teaching decisions. 

Triangulate the performance data by collecting student feedback, and noting your own reflections about what worked well and what did not. Revise the assignment instructions, rubric, and teaching practices accordingly. Consider how you might better align your assignment with your course objectives and/or provide more opportunities for students to practice using the knowledge and skills that they will rely on for the assignment. Additionally, keep in mind societal, disciplinary, and technological changes as you tweak your assignments for future use. 

Now is a great time to reflect on your practices and experiences with assignment design and think critically about your approach. Take a closer look at an existing assignment. Questions to consider include: What is this assignment meant to do? What purpose does it serve? Why do you ask students to do this assignment? How are they prepared to complete the assignment? Does the assignment assess the kind of learning that you really want? What would help students learn from this assignment? 

Using the tips in the previous section: How can the assignment be tweaked to be more authentic and meaningful to students? 

As you plan forward for post-pandemic teaching and reflect on your practices and reimagine your course design, you may find the following CTL resources helpful: Reflecting On Your Experiences with Remote Teaching , Transition to In-Person Teaching , and Course Design Support .

The Columbia Center for Teaching and Learning (CTL) is here to help!

For assistance with assignment design, rubric design, or any other teaching and learning need, please request a consultation by emailing [email protected]

Resources

Transparency in Learning and Teaching (TILT) framework for assignments. The TILT Examples and Resources page ( https://tilthighered.com/tiltexamplesandresources ) includes example assignments from across disciplines, as well as a transparent assignment template and a checklist for designing transparent assignments. Each emphasizes the importance of articulating to students the purpose of the assignment or activity, the what and how of the task, and specifying the criteria that will be used to assess students.

Association of American Colleges & Universities (AAC&U) offers VALUE ADD (Assignment Design and Diagnostic) tools ( https://www.aacu.org/value-add-tools ) to help with the creation of clear and effective assignments that align with the desired learning outcomes and associated VALUE rubrics (Valid Assessment of Learning in Undergraduate Education). VALUE ADD encourages instructors to explicitly state assignment information such as the purpose of the assignment, what skills students will be using, how it aligns with course learning outcomes, the assignment type, the audience and context for the assignment, clear evaluation criteria, desired formatting, and expectations for completion whether individual or in a group.

Villarroel et al. (2017) propose a blueprint for building authentic assessments which includes four steps: 1) consider the workplace context, 2) design the authentic assessment; 3) learn and apply standards for judgement; and 4) give feedback. 

References 

Ambrose, S. A., Bridges, M. W., & DiPietro, M. (2010). Chapter 3: What Factors Motivate Students to Learn? In How Learning Works: Seven Research-Based Principles for Smart Teaching . Jossey-Bass. 

Ashford-Rowe, K., Herrington, J., and Brown, C. (2013). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education. 39(2), 205-222, http://dx.doi.org/10.1080/02602938.2013.819566 .  

Bean, J.C. (2011). Engaging Ideas: The Professor’s Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom . Second Edition. Jossey-Bass. 

Frey, B. B, Schmitt, V. L., and Allen, J. P. (2012). Defining Authentic Classroom Assessment. Practical Assessment, Research, and Evaluation. 17(2). DOI: https://doi.org/10.7275/sxbs-0829  

Herrington, J., Reeves, T. C., and Oliver, R. (2010). A Guide to Authentic e-Learning . Routledge. 

Herrington, J. and Oliver, R. (2000). An instructional design framework for authentic learning environments. Educational Technology Research and Development, 48(3), 23-48. 

Litchfield, B. C. and Dempsey, J. V. (2015). Authentic Assessment of Knowledge, Skills, and Attitudes. New Directions for Teaching and Learning. 142 (Summer 2015), 65-80. 

Maclellan, E. (2004). How convincing is alternative assessment for use in higher education. Assessment & Evaluation in Higher Education. 29(3), June 2004. DOI: 10.1080/0260293042000188267

McLaughlin, L. and Ricevuto, J. (2021). Assessments in a Virtual Environment: You Won’t Need that Lockdown Browser! Faculty Focus. June 2, 2021. 

Mueller, J. (2005). The Authentic Assessment Toolbox: Enhancing Student Learning through Online Faculty Development . MERLOT Journal of Online Learning and Teaching. 1(1). July 2005. Mueller’s Authentic Assessment Toolbox is available online. 

Schroeder, R. (2021). Vaccinate Against Cheating With Authentic Assessment . Inside Higher Ed. (February 26, 2021).  

Sotiriadou, P., Logan, D., Daly, A., and Guest, R. (2019). The role of authentic assessment to preserve academic integrity and promote skills development and employability. Studies in Higher Education. 45(111), 2132-2148. https://doi.org/10.1080/03075079.2019.1582015    

Stachowiak, B. (Host). (November 25, 2020). Authentic Assignments with Deandra Little. (Episode 337). In Teaching in Higher Ed . https://teachinginhighered.com/podcast/authentic-assignments/  

Svinicki, M. D. (2004). Authentic Assessment: Testing in Reality. New Directions for Teaching and Learning. 100 (Winter 2004): 23-29. 

Villarroel, V., Bloxham, S, Bruna, D., Bruna, C., and Herrera-Seda, C. (2017). Authentic assessment: creating a blueprint for course design. Assessment & Evaluation in Higher Education. 43(5), 840-854. https://doi.org/10.1080/02602938.2017.1412396    

Weimer, M. (2013). Learner-Centered Teaching: Five Key Changes to Practice . Second Edition. San Francisco: Jossey-Bass. 

Wiggins, G. (2014). Authenticity in assessment, (re-)defined and explained. Retrieved from https://grantwiggins.wordpress.com/2014/01/26/authenticity-in-assessment-re-defined-and-explained/

Wiggins, G. (1998). Teaching to the (Authentic) Test. Educational Leadership . April 1989. 41-47. 

Wiggins, G. (1990). The Case for Authentic Assessment. Practical Assessment, Research & Evaluation, 2(2).

Wondering how AI tools might play a role in your course assignments?

See the CTL’s resource “Considerations for AI Tools in the Classroom.”


Library Glion

How to use assessment criteria

  • What are assessment criteria
  • Three ways of using assessment criteria
  • How to write assessment criteria
  • Steps in writing assessment criteria
  • Some key points to keep in mind


Acknowledgement: This document consists of relevant extracts from:

Gosling, D., & Moon, J. (2001). How to use learning outcomes and assessment criteria . SEEC.

Assessment criteria specify how student performance in respect of the course learning outcomes is to be recognised. They are statements which specify the standards that must be met and what evidence will be taken to show achievement of learning outcomes. Assessment criteria have to be understood relative to the definition of the level of the course and the course learning outcomes.

‘Assessment (or performance) criteria provide clear indications of how achievement may be demonstrated’ (InCCA, 1998, p. 36).

The purpose of assessment criteria is to establish clear and unambiguous standards of achievement in respect of each learning outcome. They should describe what the learner is expected to do in order to demonstrate that the learning outcome has been achieved (NICATS, 1998, p. 37).

Assessment criteria: descriptions of what the learner is expected to do, in order to demonstrate that a learning outcome has been achieved (CQFW et al., 2001).

The assessment criteria are not to be confused with the assessment tasks themselves, e.g. to design x, or write an essay about y. The assessment criteria specify how the task e.g. the design, the essay, the project, the dissertation will be judged.

Three elements:

Learning outcome:

By the end of the module students will be expected to be able to design a page layout to a given brief.

Method of assessment:

Lay out the information attached as a book cover using the following publisher’s brief.

Assessment criteria:

– clarity of chosen font

– appropriate colour combinations

– attractiveness of the design

– match with publisher’s brief within stated budget

Assessment criteria may be used in three ways. They may specify:

  • Threshold standards: Description of what the learner is expected to do, in order to demonstrate satisfactory achievement of learning outcomes.
  • Grading criteria: what is required for achievement of each of the grades being awarded, e.g., for a pass at 50%, or for grades of 60%, 70%, 80% and so on.
  • General criteria: a template of characteristics or qualities against which the student’s performance of the assessment task will be judged.

At GIHE, assessment criteria are used to specify grading criteria.

When writing assessment criteria, many of the points to keep in mind are the same as for writing learning outcomes. Clarity and brevity are important and, as far as possible, ambiguity should be avoided. The language should also be clear to all those who will use the criteria - students and staff. The criteria must be capable of being measured or assessed in a valid and reliable way. The criteria are concerned with essential aspects of performance for the achievement of a pass or the specified grade.

  • Consider the learning outcome being tested (for example: demonstrate a critical awareness of Luxury Industry trends).
  • Consider the assessment task set (for example: make a presentation of an analysis and recommendations about a specific Luxury Industry trend. )
  • Brainstorm requirements for, or attributes of, successful performance of the assessment task (for example: requirements for a satisfactory presentation: clarity, fluency, appropriateness to the audience, etc.; requirements for a satisfactory demonstration of ‘critical awareness’: knowledge of different theories, application of theory to the case study, evidence of personal argument…)
  • If necessary, specify the range to clarify contextual factors and the level (for example: which theories students are expected to refer to, which types of urban issues they can be expected to deal with).
  • Focus on what is essential and categorise the requirements or attributes into clearly worded criteria.
  • Check that the criteria are measurable or assessable in valid and reliable ways and that the  criteria are clear and unambiguous (for example: have another colleague read the criteria to see if s/he interprets them in the same way as you do).
  • Repeat steps 3, 4, and 5 until you are fully satisfied.

Some key points to keep in mind

Assessment criteria should be written with the following factors in mind:

  • the published aims of a programme
  • learning outcomes for the module
  • the level at which the criteria will apply
  • the nature of the discipline or subject area
  • comparability of standards with equivalent degree programmes elsewhere
  • the nature of the assessment task

All of these are important, so let us examine each in a little more detail:

The criteria should reflect what has been published about the overall aims of the programme . If it has been said, for example, that the programme  will enable students ‘to Identify, analyze and solve a range of complex problems using both recognized and innovative tools and evidence ’ then the criteria within relevant courses will need to indicate what will constitute ‘recognized and innovative tools’.

The criteria should be informed by the published learning outcomes of the course. If, for example, the module has as a learning outcome that students will be able to ‘Critically interpret a firm’s financial information within its business and competitive environment’ then the criteria should make it clear that an analysis of a specific ‘business and competitive environment’ will be expected and reflected in the marking scheme. However, note that the assessment criteria should expand on the information provided in the learning outcome and not be repetitive (InCCA, 1998 p.36).

Assessment criteria should reflect the level of the course. Thus, because FHEQ level four of higher education is less demanding than FHEQ level five, the assessment criteria will need to reflect this fact. Higher level learning will be reflected in the cognitive action verbs chosen to describe appropriate performance – for level five, rather than descriptive words such as ‘outline’ or ‘define’ (from the ‘understand’ level of Bloom’s taxonomy) use ‘differentiate’, ‘examine’, ‘organise’ (from the ‘apply’ level of Bloom’s taxonomy).

Each subject or discipline has distinctive epistemological characteristics, which will be reflected in the kinds of criteria written for that subject / discipline. Such characteristics might be demonstrated in the type of enquiry in which students are engaged, the type of evidence on which they will draw and the type of activities on which they will be assessed. GIHE assessment criteria may reflect the epistemological characteristics of management or business.

It is important that the assessment criteria are comparable to standards expected on similar degree programmes elsewhere. Whilst each programme will have its own distinctive characteristics the overall standard should be comparable with other degree or diploma or postgraduate awards in other comparable institutions. Reference to the European, Swiss and UK HE qualification frameworks, qualification descriptors, level descriptors and subject benchmarks will help here.

The assessment criteria need to relate to the specific requirements of the assessment task. For example, the criteria for a good oral presentation are different from those for a written assignment and the criteria for a good business report will differ from those for a reflection. Equally the criteria for a group work project task need to be different in some respects from the criteria for an individual piece of work. Make sure the criteria describe the performance required for the task set.

CQFW, NICATS, NUCCAT, SEEC. (2001). Credit and HE Qualifications: Credit guidelines for HE Qualifications in England, Wales, and Northern Ireland . CQFW.

InCCA. (1998). A Common Framework for Learning. DfEE.

NICAT. (1999). A manual for the Northern Ireland credit framework. NICAT.

University of Southern Queensland


Managing Assessment

Cristy Bartlett; Kate Derrington; and Anbarasu Thangavelu


Baanda-li means “to put in order” or “organise” in the Yuwaalaraay/Euahlayi dialect of the Gamilaraay language. This piece tells the story of how, with the right planning and organisation (represented by the pink and orange community gathering in the top left-hand corner), students can utilise the learning tools they have developed during their studies (represented by the royal blue rocks) to overcome assessment challenges (represented by the diagonal watercourse). “Greener pastures” (represented by the green dotwork) can be achieved with the right assessment planning. Kc Rae, Aboriginal artist.

Introduction

It is not uncommon to feel confused or a bit overwhelmed when approaching assessment at university. This chapter is designed to guide you through the process of assessment management and provide strategies to help make your assessment preparation less stressful. When relevant, we will refer you to other sections in this book where specific skills are discussed in detail.

We start by discussing what to do with the information you receive about your assessment items. It is important to know exactly what your requirements are before beginning to research or write. Spending a little extra time at the start, analysing your question thoroughly and reading your criteria sheet, will make the assignment research and writing processes easier, and will save you time. The next section covers assessment planning and outlines how planning can save you time and effort. We discuss key components of assessment preparation, including finding information, writing, revising, and submitting your assessments. Your assessment process doesn’t finish when you submit. We describe how to use your feedback to help you improve your understanding of the material and to improve future submissions.

We then briefly discuss academic integrity and what it means for university students before concluding the chapter and providing key points on managing your assessments. At the end of this chapter you should have a good understanding of some strategies and techniques for managing your assessments at university.

Understanding your assessment task

In this section we discuss the different types of information that you may receive about your assessment items and how to interpret them. Other chapters in this book provide more detailed information about specific types of assessment items, such as the Presentations chapter about presentation assessments. It can be tempting to skip over reading the assessment information and planning stages; however, these steps will save you time. Knowing exactly what the task is, and is not, will ensure that your activities are focused. Similarly, spending some time planning how you will approach the task and structure your response will help you avoid activities that don't contribute to your final submitted work. We will first discuss the information that you will find in your assessment task sheets.

Task Sheets

For most assessment tasks you will be provided with a task sheet. The task sheet (also known as an assignment information sheet, assessment outline or task brief) provides information about your assessment task. It is important to read this carefully, as it includes the key information required to undertake your assessment. In this section we outline the main components of a task sheet and how to interpret them, starting with the topic words, task words, and limiting words. These terms are defined in the table below (see Table 18.1).

Table 18.1 Parts of an Assignment Question

Below is an example of what an assessment task looks like. Notice the use of topic words (coloured orange), limiting words (coloured blue) and task or directive words (coloured green). Consider what a topic and task analysis look like in this example question.

Conducting a topic analysis is important to develop effective, targeted key words to search for assignment resources. You can also use this approach for essay questions in an exam to make sure you are on track with your response. You can then use these words and their synonyms to start looking for good quality information which is relevant to your topic (see the Working with Information chapter for more about finding information).

This is a simple example of an assessment task. It is important to look for additional information that sets the scope, parameters or guidelines. For instance, in the previous example question, noting the Australian context is important for your research process so that you only search for and include relevant information. This will keep you on track and avoid investing your time on information you don’t need.

Assessment style

Other things to consider from your task sheet include the style of assessment (e.g., essay, report, or presentation), word length, file type and size, and the number, type, and recency of the references required. The style of assessment tells you what sort of document you need to complete and the sections you may need to include (e.g., an essay would normally include an introduction, body paragraphs, and a conclusion, whereas a presentation may require a recorded slide presentation).

Written assessment word count

The word count, or word length, is important as it indicates the number of words required to adequately address the assessment task. If you have written significantly less than the word count, then you have probably not covered the topic fully. If you are significantly over the word limit, then you have included too much information or may need to review your work to ensure your writing is clear and concise. (See the English Language Foundations chapter for more information about writing concisely). The section below shows what to consider when calculating your word count.

Word count considerations

Things to check regarding the word count include:

  • Is there a ‘firm’ maximum or minimum word limit (e.g., no more than 1500 words) or a word count range (e.g., 1350 – 1650 words or 1500 words +/- 10%)? (See the short sketch after this list for how such a range works out.)
  • What is the penalty for going over the word limit (sometimes markers will only review your writing up to the word limit, or you may lose marks for going over the limit)?
  • Does the word count include in-text citations and the reference list?
  • Does the word count include figures, tables, and appendices?
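
If it helps to see the arithmetic written out, the short sketch below is an illustration added to this guide rather than part of any task sheet; the 1500-word target, the 10% tolerance, and the sample text are all hypothetical. It simply shows how a range such as "1500 words +/- 10%" translates into minimum and maximum limits.

```python
# Hypothetical sketch: turning a word-count target with a tolerance
# (e.g., 1500 words +/- 10%) into a minimum and maximum limit,
# then checking a draft against that range.

def word_count_range(target: int, tolerance: float = 0.10) -> tuple[int, int]:
    """Return the (minimum, maximum) acceptable word counts."""
    return round(target * (1 - tolerance)), round(target * (1 + tolerance))

# Replace this string with the text of your own draft.
draft = "An example draft containing only a handful of words."
count = len(draft.split())

low, high = word_count_range(1500)  # 1500 +/- 10% -> 1350 to 1650 words
print(f"Draft is {count} words; acceptable range is {low}-{high}.")
print("Within range." if low <= count <= high else "Outside range - revise.")
```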

How many and what kind of references do you need to include in your assignments?

The number, type, and recency of references refers to the sources of information that you cite in your work. For example, you may need to include citations from at least five (number) peer-reviewed sources (type) published in the last eight years (recency). The chapter Working with Information provides guidance about finding, evaluating, and managing sources of information.

Marking Rubric

The marking rubric provides an overview of each marked component of the assessment task and can be helpful in the planning, writing, and reviewing phases of your assessment process. It may also be called a marking sheet, criteria sheet or scoring sheet. The marking rubric will help you to understand more precisely what your marker will be looking for when marking your assignment. Table 18.2 shows an extract from a marking criteria sheet, where the assessment task was to write an essay outlining typical and atypical development for a chosen developmental period.

Table 18.2 Extract of a marking rubric. Used with permission from Course Examiner, Mark Oliver, University of Southern Queensland.

The marking rubric provides a summary of the requirements that you would need to meet to obtain a particular mark or grade for each component of the assessment. In the example in Table 18.2, the requirement for a mark of between 9 and 10 (out of 10) for the first criterion is: “Demonstrates a sophisticated understanding of the physical, cognitive, and psycho-social [aspects] of the identified developmental period. There are no gaps or misunderstandings.” This indicates that you would need to demonstrate a comprehensive understanding in your discussion of the physical, cognitive, and psycho-social aspects of development for the nominated developmental period to receive a mark of 9 or higher.

The marking rubric is also useful when you are reviewing your work prior to submission. You can use the rubric as a checklist to ensure that you have included all the key pieces of information. The total marks allocated to each criterion are a useful guide to its relative importance in the assessment task. The Feedback section in this chapter also discusses how you can use the feedback from your marked rubric.

Assessment Weighting

The weighting of an assessment item refers to the amount, or proportion, that each assessment mark contributes to your final grade. You are likely to have multiple assessment items, each contributing to the final grade for your subject. In the example shown in Table 18.3, the essay contributes 25% of the final grade for the subject. The essay may be given a total possible mark of 100, which is then converted to a mark out of 25 for the subject. For example, if you receive 80 marks out of 100 for the essay, this will contribute 20 marks towards your final grade for the semester (80/100 × 25 = 20).

Table 18.3 Relative weightings of assessment items for an example subject

Figure: Graph of the assessment weightings for the example subject.

Why is the weighting important? Apart from indicating how your final grade is determined, the weighting also indicates the relative contribution that each assessment piece makes to your final grade. This allows you to allocate more time to the assessment items that will impact your grade the most. For example, you don’t want to spend 80 hours preparing the annotated bibliography, which contributes 5% towards the final grade, if that will only leave you with five hours to prepare your presentation, which makes up 20% of your final grade for the course. The final grade or score for your course is determined using the marks you receive on each of your contributing assessment items.
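
If it helps to see the weighting calculation spelled out, the short sketch below works through the same arithmetic as the essay example above. It is an illustration added to this guide, not part of the original chapter, and the assessment names, marks, and weightings are hypothetical.

```python
# Hypothetical sketch: how weighted assessment marks combine into a final grade.
# Each contribution = (mark received / total possible mark) x weighting.

assessments = [
    # (assessment item, mark received, total possible mark, weighting %)
    ("Annotated bibliography", 18, 20, 5),
    ("Essay", 80, 100, 25),
    ("Presentation", 15, 20, 20),
    ("Final exam", 60, 100, 50),
]

final_grade = 0.0
for name, mark, total, weight in assessments:
    contribution = (mark / total) * weight  # e.g., 80/100 x 25 = 20 marks
    print(f"{name}: {contribution:.1f} of a possible {weight} marks")
    final_grade += contribution

print(f"Final grade: {final_grade:.1f} out of 100")
```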

Planning your assessment

Once you have understood the requirements of your assessment task, writing a plan will assist you to:

  • Break the task into manageable sections
  • Keep your study time focused on what you need to achieve
  • Keep to your word limit

Figure 18.4 Example of a concept map for the assignment example question

The chapter Time Management provides useful information about breaking larger tasks into smaller ones and managing your study time. Your time management strategy and study schedule should include time for finding relevant information, writing, reviewing, proofreading, editing your work and submitting your assessment. To begin planning your response, brainstorm key ideas. Next, try to organise these ideas using a concept map, table, or other visual organiser. This gives you an overview of the direction of your ideas. Figure 18.4 is an example of a concept map for the assignment example question provided earlier in the task analysis section in this chapter. From your concept map you can then determine the structure of your assessment, including definitions and terms you will need to include in your introduction.  You can use your concept map to decide on the key points which will become the basis of your topic sentences (the main point) for each paragraph. Using a concept map to plan your structure will make your writing more coherent.

Finding information

You are usually going to need to find credible information once you understand the requirements of the assessment task. The chapter Working with Information provides guidance on finding and managing information. Remember to keep the reference information with any notes that you are taking, so you can appropriately cite this information.

Figure: Aboriginal artwork of a winding path, representing the search for information.

Writing your assignment

You are now ready to write your assignment. Remember to refer to your original assignment plan, including the key points you will be making. You may need to revise your plan after you have found and read information related to the topic, for example, if you find information that supports a new key point you would like to make. This is where your plan becomes extremely useful: when you are writing, focus on each planned paragraph or topic. This will help you concentrate on what is required for the task and prevent you from going off track.

The English Language Foundations chapter provides information about enhancing your writing and the Types of Assignments chapter provides additional information about writing specific sections and types of assignments. If you are preparing a presentation, then the Presentations chapter will be a useful guide.

Revising and reviewing

Editing and proofreading

Allocate time for revising and reviewing your work before submitting. This allows you to find and fix small errors that could lose you marks. It is also an opportunity to review your entire document to ensure that your ideas are fully explained and linked to the assessment task. If possible, leave your writing for a couple of days before you start reviewing, editing, and proofreading. This will allow you to see your work with ‘fresh eyes’ and you will be more likely to detect errors and inconsistencies. Review each of your sentences (e.g., correct punctuation, length, and spelling), paragraphs (e.g., clear topic sentences and credible, appropriately cited evidence), and the document structure. You can also use the information from the English Language Foundations chapter and the editing and proofreading checklist tables in the Writing Assignments chapter as guides when reviewing your writing.

The marking rubric and task sheet can also be used as checklists to ensure that you have covered all the key aspects required for the assessment task. For example, check that you have:

  • used the correct referencing style
  • cited the required number of credible sources (if specified)
  • provided information on all aspects of the assignment task
  • used the required headings and formatting
  • written within the required word count

Consider any previous feedback received on similar tasks to see if there are areas for improvement. For example, you may have received feedback that your writing lacks flow; if so, check how you have transitioned from one paragraph to another. (For more information, refer to Table 5.3, transition words and phrases, in the English Language Foundations chapter.) The English Language Foundations chapter also provides useful information about enhancing your writing, and the chapters Writing Assignments and Types of Assignments provide additional information about writing specific sections and types of assignments.


Don’t forget to submit your assignment. Check that your submission has been accepted if you are submitting online and keep a copy of any assessments once submitted. Plan to reward yourself and acknowledge your achievement once you have submitted. Choose a reward that’s meaningful to you.

What is feedback?

Feedback is information about how well you have performed a task. In tertiary education, it is a key tool used to promote student development. Understanding how to engage with your feedback, and why it is important, is critical to your learning both at university and later in the workplace.

Forms of feedback

There are different forms of assessment-related feedback that you may receive. Feedback can be given informally and verbally in class, within study groups, in practical settings, or in conversations with lecturers or peers. Make sure you listen for feedback on your performance in these communications. Feedback is also regularly given, informally and formally, in written form. This may include comments on an online discussion forum, a class activity, or an assessment piece, from your lecturer and, in some cases, from other students.

Preparing for feedback


At university, staff and your peers provide you with verbal and written feedback to help you learn and develop. However, this feedback can only be effective if you are prepared to receive it. In practice this means you need to do the following:

  • Keep an open mind: The person who provides you with feedback may be critical of your work. This feedback is an important feature of learning at university and it is meant to be constructive, not personal. Be prepared to consider it.
  • Be reflective: To ensure that you are ready to use feedback, adopt a reflective mindset. This means reading or listening to comments and thinking about how you may use this information to improve your work.
  • Get ready to change: Feedback is only effective if it is used. This means that you need to be prepared to act or change how you perform a task or engage in an activity in response to feedback. If you are unwilling to make changes, you limit the positive impact feedback can have.

Engaging with feedback

Once you have received feedback in class, online, in practicums or on a piece of assessment, you need to engage with it and act on it. This means you need to allow yourself time to review, think about, clarify, and apply feedback to your current and future work. As engaging with your feedback is part of the learning process, use your feedback to:

  • Improve your work and/or practice
  • Develop your skills
  • Improve your marks

If you don’t consider your feedback, then you may continue to lose marks or make the same mistakes in future assessments or tasks. Feedback is an ongoing process and includes:

  • Constructive feedback on areas for improvement during your studies
  • Feedback about your work. It is not about you as a person (try not to take it personally).

You need to make a conscious effort to change your work where necessary to facilitate your growth, development and learning.

Hints and tips for using feedback

There are several strategies you can utilise to take advantage of your feedback. Remember, sometimes it helps to read through your feedback, then leave it for a while before engaging with it. Consider:

  • What did you do well?
  • What can you improve for your next assessment?
  • What information or support will you need to develop those skills?
  • What did you get partial or no marks for? These are areas for improvement.

Figure: Example of a marked marking rubric.

In the example above, a mark of 8/10 has been allocated for the assignment criterion Literacy and written communication skills (all sections). This mark indicates that there are some minor errors in vocabulary, grammar, punctuation, word choice, spelling, and/or organisation. However, the writing was generally purposeful and clearly conveyed the main points. Consider the difference between the criteria for the mark received and the criteria for full marks. In this example, additional time spent editing and proofreading may be required to improve the mark (see the chapters English Language Foundations and Writing Assignments for more information about editing and proofreading).

Dealing with negative feedback

We all make mistakes and have areas for improvement, so try not to be too hard on yourself. Put your feedback in perspective and remember, it's not personal. Importantly, use your feedback to your advantage and learn from it. Last but not least, ask for assistance; there are resources and people available to help you engage with your feedback and improve your skills. Seek clarification from your lecturer or tutor if you don't understand your assessment feedback.

Did you know?

  • Markers are people too, and sometimes they make mistakes.
  • Most errors are discovered in the moderation process, but occasionally a marking error slips through.
  • If that has happened to you, then respectfully contact the course examiner to raise the error. Be clear and factual.
  • Don’t rush to ring the marker while you are angry or upset. Make sure your contact is respectful.
  • Allow appropriate time for your marks to be reviewed.
  • Being close to the next grade, or usually getting better marks, are not examples of errors in marking.

Academic Integrity

Academic integrity includes, but is more than, correctly acknowledging the sources of any information that you use in your assignments (as discussed in the chapter Integrity at University ). Academic integrity also involves engaging in behaviours and actions that are honest, respectful, and ethical.

According to the Tertiary Education Quality and Standards Agency (TEQSA; 2022), academic integrity is:

the expectation that teachers, students, researchers and all members of the academic community act with: honesty, trust, fairness, respect and responsibility.

Why is academic integrity important? Demonstrating academic integrity shows that you are honest, trustworthy, and responsible, each of which is a critical behaviour for your future professional roles. Demonstrating academic integrity also means that you are not engaging in academic misconduct. Academic misconduct occurs when an action or behaviour is not consistent with academic integrity. Examples of actions or behaviours that represent academic misconduct include not attributing the work of others (plagiarism), working with other students to write and use an assignment (collusion), and asking or paying others to complete your assignment for you or inappropriately using artificial intelligence (contract cheating). It is recommended that you check your university's guidelines regarding its policy on academic integrity and the use of artificial intelligence. Most universities have formal processes to investigate academic misconduct, and there are a range of penalties that may be applied when academic misconduct has occurred. Criteria related to academic integrity may also contribute to your overall mark for an assessment piece. For example, the marking rubric extract shown in Figure 18.10 shows that a total of five marks is allocated to the application of referencing. The Working with Information chapter has additional information about appropriately acknowledging the source of information, including in-text citations and referencing.

Figure 18.10 Marking rubric extract showing the marks allocated to referencing.

The purpose of this section is to highlight the importance of academic integrity and to provide you with information about appropriate practices and approaches. Each of the chapters in this book provides useful information to assist you in adopting practices that are consistent with academic integrity. (See the chapter Integrity at University for more information on academic integrity).

In this chapter we described some of the assessment information that you will receive in your courses. We discussed the importance of planning your assessment preparation and the steps involved. We examined the importance of understanding and applying feedback to improve your assessments and finished with an overview of academic integrity. It can be tempting to skip over reading the assessment information and planning stages; however, these steps will save you time overall and contribute to your academic success.

  • Your task sheet and marking rubric provide key information about your assessment task.
  • Developing a plan for the assessment task will help you to keep on track.
  • Allow time for reviewing and editing your work before submitting.
  • Reward yourself for submitting your assessment.
  • Understanding and engaging with your feedback is critical for success, and feedback is not intended to be personal.
  • Academic integrity involves engaging in behaviours and actions that are honest, respectful, and ethical.
  • Check your university’s academic integrity policies regarding the use of artificial intelligence to ensure that you don’t engage in academic misconduct.

Tertiary Education Quality and Standards Agency. (2022). Defining academic integrity . https://www.teqsa.gov.au/students/understanding-academic-integrity/what-academic-integrity

Academic Success Copyright © 2021 by Cristy Bartlett; Kate Derrington; and Anbarasu Thangavelu is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
