Rubric Repair: 5 Changes that Get Results

By Jennifer Gonzalez. This post was first published on The Cult of Pedagogy, where you can listen to the podcast episode with Mark Wise.

In my 20-year career as an administrator, I’ve had the opportunity to be a fly on the wall in hundreds of classrooms across grade levels and content areas. While much has changed in education over those years, one element has remained constant: the well-intentioned use of rubrics with varying levels of success.

Starting with their use as early as kindergarten and continuing through high school, rubrics are meant to clarify expectations, but poor design can make the experience anything but clear. Rows of criteria describe, often in 10-point font, how students will be graded for an upcoming project. Students attempt to decipher the details, review the exemplars provided, and note the corresponding due dates, but the information doesn’t always translate into action.

This is unfortunate because a well-designed rubric can be more than just an evaluation tool. For the teacher, it can clarify expectations, which increases the likelihood that these traits will be attended to during instruction and as a result, can provide more targeted feedback. For the learner, knowing what is expected from the start along with clear indicators of progress provides an effective means to self-assess, make adjustments, and improve the quality of performance.

So how do we get there? How do we take our current rubrics and fine-tune them so they deliver on those promises? These five guidelines will help.

1. Prioritize the Most Important Learning Outcome

Teachers often include multiple criteria in their rubrics without adding the primary learning outcome they are most interested in: Did it persuade? Did it create an emotional connection? Did you win your argument? We tend to construct our rubrics with important but sometimes peripheral components of performance because they are easy to see, count, or score (Wiggins and McTighe, p. 34). For example, if your students are writing an editorial, it could be stylistically sophisticated, well-organized, and meet the length requirement. However, an editorial, regardless of how well written, is wholly ineffective if it fails to persuade the reader.

Moreover, since students can become overwhelmed by the sheer number of criteria they are required to meet, less can be more. One way to accomplish this is to use a single-point rubric, which allows students to focus on the stated expectations while receiving feedback on the degree to which they are meeting them. If you are using a more traditional 4-6 point rubric that details the continuum of performance, it is even more incumbent upon you to identify the main reason students are engaged in the task in the first place and edit down the criteria so they incentivize students to focus on the most important aspects of the performance.

The “before” example below shows a rubric that measures a lot of things that don’t have anything to do with whether a student understands the lunar phases: things like creativity, attractiveness, and whether the task is completed on time. If the purpose of the assignment is to assess whether students have really learned the lunar phases, the rubric should focus primarily on whether the content in the model is accurate and effective.

In the revised version below, all of the criteria are focused on the quality and accuracy of the information in the model and measure the scientific thinking that building the model was meant to elicit in the first place.

2. Weight Each Criterion Appropriately

As designers of rubrics, we can signal to students that certain criteria matter more than others. Just because a rubric has four criteria doesn’t mean that each needs to be worth 25 percent of the score. With the weight of each criterion adjusted, the rubric itself guides students to focus on what is most important.

In the example below—which is the same revised rubric from above, but where the teacher wanted to include some accountability for mechanics—all four criteria are weighted exactly the same. This means a student who demonstrates a perfect understanding of the science behind lunar phases, but who struggles with spelling and punctuation, could end up with a C on the project. That would not be a true reflection of mastery.

By contrast, in the revised rubric below, “Mechanics” is only assigned 10 percent of the overall grade, while the other three criteria make up 90 percent combined. This way, the final grade will be a much more reliable measure of student understanding of lunar phases.
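To see how the weighting plays out, here is a quick sketch of the arithmetic. The criterion names, scores, and weights are hypothetical, chosen only to mirror the scenario above: a student earns top marks on the three content criteria and the lowest mark on mechanics.

```python
# Hypothetical scores on a 4-point rubric: perfect content, weak mechanics.
scores = {"accuracy": 4, "completeness": 4, "reasoning": 4, "mechanics": 0}

def weighted_percent(scores, weights):
    """Weighted average of rubric scores, as a percentage of the 4-point max."""
    total = sum(scores[c] * weights[c] for c in scores)  # weights sum to 1
    return round(100 * total / 4, 2)

equal = {c: 0.25 for c in scores}                  # every criterion worth 25%
adjusted = {"accuracy": 0.30, "completeness": 0.30,
            "reasoning": 0.30, "mechanics": 0.10}  # mechanics only 10%

print(weighted_percent(scores, equal))     # 75.0: a C, despite full content mastery
print(weighted_percent(scores, adjusted))  # 90.0: reflects understanding of the science
```

The same student moves from a C to an A- simply because the weights now match what the assignment was actually meant to measure.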

Another thing to keep in mind is that each criterion doesn’t have to be graded every time or the same way. Within the structure of the rubric, teachers have a great deal of flexibility. Although all of the criteria are important, we can use our discretion by grading only certain items, then attending to the others when those skills or concepts are formally taught and practiced. Likewise, since our expectations of what students can accomplish at the beginning of the year are quite different from what we expect at the end of the year, we can continue to adjust the grade or point values we assign to each column.

3. Align Point Values with Grades

Your point values for each column need to yield an accurate reflection of the student’s performance. For example, on a 4-point rubric where a “3” means “meets expectations,” most teachers believe the corresponding grade should fall somewhere in the A- to B+ range. Yet 3 out of 4 points works out to 75 percent, a C, which is what a student would actually earn by meeting expectations for every criterion in that column. This mismatch forces the teacher either to give an unfair grade or to alter the feedback in order to generate the desired grade. Neither result is helpful for the student.

A design tip is to look at each column vertically and choose a number or range that would be appropriate for a student scoring exclusively within that column. The corresponding grades don’t have to reflect a neat 4, 3, 2 progression. Often using decimals or a range of numbers is necessary to align each column vertically to a grade that matches the column’s descriptors.
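As a sketch of that design tip, compare a straight proportional conversion with a hand-chosen grade for each column. The column labels and percentages here are hypothetical, just to show the shape of the fix:

```python
# A straight proportional conversion undervalues "meets expectations."
def proportional(points, max_points=4):
    return round(100 * points / max_points)

# Instead, read each column vertically and assign the grade a student
# scoring entirely in that column would deserve (hypothetical values).
column_grade = {
    4: 98,  # exceeds expectations  -> A/A+
    3: 90,  # meets expectations    -> A-/B+ range
    2: 80,  # approaching           -> B-/C+ range
    1: 65,  # beginning             -> D range
}

print(proportional(3))   # 75: a C for a student who met expectations
print(column_grade[3])   # 90: matches the column's descriptors
```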

4. Describe Performance in Affirmative Language

We need to consider the language we choose so that our rubrics encourage students to improve. Without realizing it, when we detail the levels of performance, we tend to use degrees of deficiency (e.g., mostly, somewhat, lacking) rather than affirmative, non-judgmental statements about what students are capable of at each point along the continuum. If we truly want the rubric to strengthen students’ ability to self-assess and thus improve their performance, we must provide clear markers for how they can move forward, not unintentionally send the message that their ongoing work is insufficient rather than on a path of progress.

The examples below show two rows in a rubric for a research project. The first uses deficit language to describe the lower levels of performance, while the second describes each level in terms of what the student CAN do.

Rubric adapted from Asia Society Center for Global Education.

A real-world model that clearly illustrates a learner’s progression toward proficiency is the traditional swim chart, which indicates where the swimmer is on the path toward independence in the deep end of the pool. The language of the swim chart describes traits, from treading water to an extended front crawl, that are required for the learner to move from Tadpole (beginner) to Seal (advanced). The expectations are clear, measurable, and non-judgmental: the chart describes what swimmers can do rather than what they can’t. We can view the transparency and progression of the swim chart as an aspirational goal for our rubric design, being mindful to choose language that states the desired outcome rather than anticipated problems.

Educators are already working toward this in the area of world languages. The American Council on the Teaching of Foreign Languages (ACTFL) has performance descriptors that describe the learner’s degree of communication fluency. ACTFL has also developed “Can Do” statements that describe what language learners can do consistently and independently. Like the swim chart, these indicators allow learners to use the statements for self-evaluation so they are more aware of what they know and can do in the target language. In turn, world language educators have developed rubrics that match those desired outcomes.

5. Bring the Rubric to Life with Models

When we design rubrics, we tend to pore over their construction. We perseverate over the language we use (“should I say ‘clearly’… or ‘distinctly’?”), repeatedly delete rows or columns, and painstakingly choose fonts.

Students experience the rubric differently. After being introduced to it at the start of a project, the next time they typically see it is when it is returned, replete with circled boxes, teacher comments, and a final grade. Understandably, many students suffer from “rubric fatigue,” a condition caused by encountering a series of disconnected rubrics across subject areas on any given day.

What students really benefit from are actual models (both exemplars and non-exemplars) linked to descriptors on the rubric, illustrating the quality of work expected. In the real world, we typically have many models of the performance or product we are trying to create, whether for how a game should be played, a song sung, or an editorial written. Similarly, when launching a project, showing students multiple examples of prior student work or appropriate real-world examples makes the rubric meaningful and brings its descriptors to life.

Another strategy is to have students sort the various models in order to determine the specific qualities that make some examples stronger than others. The teacher can then incorporate the students’ language in their draft version of the rubric or highlight aspects the teacher may have initially overlooked before distributing the final rubric. This process allows the students themselves to generate the criteria and descriptive language of the desired performance which deepens their understanding and creates shared ownership of the expectations for quality work.

Similarly, the rubric becomes more user-friendly when we make the criteria more visible to students. One way to do this is to hyperlink models to the descriptors, so students can access a range of examples that show where they are along the continuum and exactly how they can improve.


As educators, we are rightfully drawn to the goal of establishing a classroom where students have opportunities to engage in complex problem solving, participate more actively in dialogue and debate, design their own experiments, or research topics that interest them. These types of experiences—ones that do not result in a single “correct” answer or follow a formulaic procedure—require more of an open-ended assessment tool with clear guidelines for success that students can use to guide, self-assess, and ultimately improve their learning. Hopefully by following these five suggestions, your rubrics will help clarify what you really want students to take away from the experience, provide your learners with the means to get there, and allow you to fairly and honestly assess their performance.


Wiggins, G., & McTighe, J. (2012). The understanding by design guide to advanced concepts in creating and reviewing units. Alexandria, VA: ASCD.
