How do teachers determine letter grades and GPAs from standards? (Standards-Based Grading)

The purpose of standards-based grading/reporting is to communicate students’ current strengths and areas for improvement relative to course or grade-level standards. It may seem counterproductive to “go back” to letter grades once a student’s level of learning has been described using an integer scale (e.g., 1-4) with corresponding descriptions of learning. Because some secondary schools may need to determine letter grades and grade point averages, the purpose of this post is to describe several ways to make this happen. The first step is to determine a standard score. The second and final step is to determine a final grade from the standard scores using one of three methods. In other words, a secondary school using standards-based grades does not need to eliminate letter grades entirely if there is a compelling reason to keep reporting them.

Determining a standard score (level of learning)

No mathematical formulas are needed to determine a student’s standard score. Let’s consider Tyler, a student who has a “3” level of learning (demonstrates understanding with minor errors) right now in the grade book for a high school math standard, “Represent data with plots on the real number line” (HSS.ID.A.1). After working with the teacher to complete a re-learning plan, Tyler completes a new assessment on data plots. His teacher determines that he now has a “4” level of learning (demonstrates proficient understanding of the standard). Because we want to communicate Tyler’s current level of learning, the teacher would erase the 3 in the grade book and replace it with a 4 rather than averaging these two attempts.

Determining a letter grade based upon the standard scores

In my experience and observation, schools have used one of three methods when converting standards to letter grades in a standards-based grading environment. I will explain each one in detail below using the following fictitious grade book, which we’ll assume represents a student named Cassy’s level of learning in math near the end of a reporting period.

[Cassy’s level of learning in math near the end of the reporting period (sample)]

Convert to Percentages Method
In order to determine a letter grade using the convert to percentages method, use the following steps:

  • Add up all of the standard scores.
  • Divide the sum by the maximum possible total (the top score on the scale times the number of standards).
  • Use the school’s typical 90%, 80%, 70%, etc. percentage scale to determine the letter grade.

Using Cassy’s math standards and levels of learning above, her standard scores currently sum to 34 (4+4+3+4+4+4+4+2+4+1). The maximum possible total is 40 (4-point scale x 10 standards). Using a typical 90, 80, 70, 60 scale, Cassy has 85% (34/40 = 85%), therefore her letter grade would be a B using the percentages method.
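The steps above can be sketched in Python. The 90/80/70/60 cutoffs mirror the traditional scale just mentioned; the function name and score list are illustrative, not part of any grade book software:

```python
# Convert-to-percentages method: sum the standard scores, divide by the
# maximum possible total, and apply a traditional percentage scale.
def percentage_grade(scores, max_score=4):
    percent = sum(scores) / (max_score * len(scores)) * 100
    if percent >= 90:
        return percent, "A"
    if percent >= 80:
        return percent, "B"
    if percent >= 70:
        return percent, "C"
    if percent >= 60:
        return percent, "D"
    return percent, "F"

cassy = [4, 4, 3, 4, 4, 4, 4, 2, 4, 1]  # the ten standard scores above
percent, letter = percentage_grade(cassy)
print(f"{percent:.0f}% -> {letter}")  # 85% -> B
```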

The convert to percentages method will work with many electronic grade books and likely makes sense for parents and students who are used to a points/percentages grading system. At the same time, this method retains a “points and percentages” feel, which is less than ideal for communicating student learning in standards-based grading.

Marzano Method

This method of determining letter grades comes from literature written by Robert Marzano and his colleagues. In order to determine a letter grade using the Marzano Method, use the following steps:

  • Average the standard scores.
  • Apply the following conversion scale.
[Source: Marzano, R. J. (2010) Formative assessment & standards-based grading. Bloomington, IN: Marzano Research Laboratory.]

Using Cassy’s math standards and levels of learning, her average is 3.4 (34/10). Based upon the conversion scale above, her letter grade would be an A.

It is important to note that in Marzano’s writing, a “4” often describes learning exceeding the standard rather than proficiency, a distinction I recommend reading about in a write-up by Dr. Thomas Guskey here. This may explain why a student with an average of 3.0 to 4.0 receives some flavor of an “A” while a student must have an average of 2.5 to 2.99 to earn some type of a “B” on this scale.
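Under those bands (3.0-4.0 for an A, 2.5-2.99 for a B), the Marzano method can be sketched as follows. Only the A and B cutoffs come from the text; the C and D cutoffs below are illustrative assumptions, not Marzano's published scale:

```python
# Marzano method: average the standard scores, then map the average onto a
# conversion scale. Only the A and B bands come from the text; the C and D
# cutoffs are assumptions for illustration.
def marzano_grade(scores):
    avg = sum(scores) / len(scores)
    if avg >= 3.0:
        return avg, "A"
    if avg >= 2.5:
        return avg, "B"
    if avg >= 2.0:  # assumed cutoff
        return avg, "C"
    if avg >= 1.5:  # assumed cutoff
        return avg, "D"
    return avg, "F"

cassy = [4, 4, 3, 4, 4, 4, 4, 2, 4, 1]
avg, letter = marzano_grade(cassy)
print(f"average {avg:.1f} -> {letter}")  # average 3.4 -> A
```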

I am also not sure how many electronic grade books will determine a letter grade using the Marzano method, therefore manually overriding final letter grades may be necessary.

Logic Rule Method

In order to determine a letter grade using the logic rule method, use the following steps:

  • Determine a logic rule for your classroom/school.
  • Count up the number of 4s, 3s, 2s, 1s, etc. currently in the grade book.
  • Apply the logic rule.

The following logic rule is adapted from Ken O’Connor’s book How to Grade for Learning, 4th edition. This is only a sample and could be revised (for example) to communicate plus and minus grades.

A = Student has demonstrated level 3 and level 4 understanding for all standards with a majority of 4s. No standard scores are below 3.
B = Student has demonstrated a mix of level 3 and level 4 understanding for all standards with a majority of 3s. No standard scores are below 3.
C = Student has demonstrated a mix of level 2, level 3 and level 4 understanding for all standards with a majority of 3s. No standard scores are below level 2.
D = Student has demonstrated a mix of level 2, level 3 and level 4 understanding with a majority of 2s. No standard scores are below level 2.
F = Student has at least one standard score of 1 or 0.

Using Cassy’s math standards and levels of learning, she has a “1” right now on the standard “Use data from a sample survey to estimate a population mean…”; therefore, her letter grade is an F. At first glance, this particular logic rule may seem harsh; however, it should be noted that students are provided multiple opportunities to demonstrate understanding in standards-based grading, so Cassy has ample opportunity to improve with the supports provided by the teacher.
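The sample logic rule above can also be expressed in code. This is a sketch of the adapted O'Connor rule, where “majority” is interpreted as more than half the scores; score sets that fit none of the rules return None for teacher judgment:

```python
from collections import Counter

# Logic rule method: count the score levels, then apply the rules in order.
def logic_rule_grade(scores):
    counts = Counter(scores)
    n = len(scores)

    def majority(level):
        return counts[level] > n / 2

    if min(scores) <= 1:                   # any 0 or 1 -> F
        return "F"
    if min(scores) >= 3 and majority(4):   # all 3s/4s, mostly 4s -> A
        return "A"
    if min(scores) >= 3 and majority(3):   # all 3s/4s, mostly 3s -> B
        return "B"
    if min(scores) >= 2 and majority(3):   # nothing below 2, mostly 3s -> C
        return "C"
    if min(scores) >= 2 and majority(2):   # nothing below 2, mostly 2s -> D
        return "D"
    return None  # no rule fits (e.g., a 4/3 tie); teacher judgment needed

cassy = [4, 4, 3, 4, 4, 4, 4, 2, 4, 1]
print(logic_rule_grade(cassy))  # F -- the single "1" triggers the F rule
```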

Few, if any, major electronic grade books can currently determine a letter grade using a logic rule, therefore the teacher would need to manually override the letter grade.

Calculating a grade point average using standards-based grading

Calculating a grade point average (GPA) in standards-based grading follows the local school guidelines, likely similar to when the school previously used points and percentages to determine letter grades. Once a letter grade has been determined from standard scores using one of the methods described above for each course, a grade point average can be calculated for honor roll and/or high school transcript purposes.
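As a sketch of that final step, once each course has a letter grade, a GPA can be computed with the school’s usual point conversion. The unweighted 4.0 scale below is a common convention shown as an assumption; local guidelines (weighting, plus/minus grades) vary:

```python
# Unweighted GPA from course letter grades (assumed 4.0 scale).
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(letter_grades):
    return sum(GRADE_POINTS[g] for g in letter_grades) / len(letter_grades)

print(round(gpa(["A", "B", "A", "C"]), 2))  # 3.25
```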

Closing Thoughts

Because some secondary schools may have a need to determine letter grades and grade point averages, the purpose of this post was to describe several ways to make this happen. School leaders should consider the pros and cons of each standard score to letter grade conversion method when making a local determination for their building or district.

Anticipating a Second Wave of Standards-Based Grading Implementation and Understanding the Potential Barriers: Perceptions of High School Principals

As secondary school leaders consider a shift toward standards-based grading (SBG) practices, they are no doubt weighing the odds of a successful implementation process. This research followed up on a study from 2014 to identify the challenges secondary school leaders experience when changing the currency of the classroom from points to learning. The results indicated that the game is changing and a new wave of SBG implementation is on the horizon.

This peer-reviewed article was published in the December 2019 issue of NASSP Bulletin.

Townsley, M., Buckmiller, T., & Cooper, R. (2019). Anticipating a second wave of standards-based grading implementation and understanding the potential barriers: Perceptions of high school principals. NASSP Bulletin, 103(4), 281-299.

Standards-based Grading: BIG Shift #2 – A Mastery Mindset

In standards-based grading, teachers have a mastery mindset. In other words, classroom structures and routines are set up to maximize student learning, regardless of when it happens.

In my experience as a K-12 student, and perhaps yours as well, each unit of study took several weeks or more and ended with some type of culminating assessment (test, project, essay, speech, etc.). The level of understanding I demonstrated at the end of the unit was written in ink. There was nothing I could do to change this static mark regardless of my future level of learning. Once the doors had been shut on the unit, few, if any, opportunities existed to remediate and/or show I had a deeper understanding.

Every educator I have worked with agrees that students learn at different rates. As such, in standards-based grading, the BIG shift is thinking about learning as dynamic rather than static within a reporting period. When students have demonstrated a higher level of understanding following some type of new learning activity, marks in the grade book or report card are revised accordingly. Because our mindset is focused on mastery, we think of learning as documented in pencil during any given quarter, trimester or semester.

In an upcoming post, I will share the next BIG shift of standards-based grading: repurposing homework and checks for understanding as ungraded practice.

Standards-Based Grading: BIG Shift #1 – Reporting Learning Rather than Tasks

In standards-based grading, teachers communicate goals of learning rather than tasks. In other words, learning is communicated in relation to the course outcomes rather than the activities (homework, quiz, project, essay, etc.) demonstrating the learning outcomes.

For many years in education, this has been the default means of communication to students and parents:

However, 14 out of 16 points does not tell John or his parents the areas in which he has successfully learned the course outcomes and the areas in which John still needs to improve.

In standards-based grading, the BIG shift is seeing learning outcomes (often called “standards” in K-12 schools) reported in grade books and/or report cards.

In an upcoming post, I will share the next BIG shift of standards-based grading: a mastery learning mindset.

Leaders of Performance: Planning with the End in Mind

[Note to readers: This column is part of an ongoing series for Iowa ASCD’s The Source e-newsletter.]

Leaders of Performance: Planning with the End in Mind

What does it mean to be a curriculum lead?  This is the sixth column in a series for Iowa administrators, teacher leaders and anyone else interested in enhancing curriculum leadership.  Over the past year or so, we’ve discussed the work of curriculum, instruction, and assessment; data analysis; processes; professional development; and relationship building.  This week, we’ll be taking a closer look at what it means to be a leader of performance.  Future columns will consider the remaining two facets of curriculum leadership: operations and change.

According to the functions of our work, curriculum leads, “…model, expect, monitor, and evaluate continuous learning of all students and staff members.”  Monitoring and evaluation matters! One way of thinking about this is that curriculum leaders ought to be thinking about what the desired outcomes are when taking on a new program or initiative.  Any educator who has been around a while knows our profession is pretty good at trying new things and/or renaming old ones.  First, it was “instructional decision making,” then it was renamed “response to intervention” and now we call it “multi-tiered system of supports.”  Last year, our district focus may have been project-based learning and this year it is creating profiles of a graduate.  Twenty years ago, we were writing standards and benchmarks from scratch (or let’s be honest, borrowing them from the district next door!), and now we’re digging into the latest iteration of the state’s content standards.  I don’t mean to sound cynical in making these observations, but instead to suggest how much time is allocated towards improving schools.  We can and should be working towards a culture of continuous improvement.  This sometimes means dropping old ideas that do not work and/or trying out new ones.  As curriculum leaders, our role is to model and evaluate these changes.  As such, the purpose of this column is to suggest several practical ways leaders can evaluate and monitor change rather than starting and stopping them without attention towards fidelity of implementation.    

One way to think about monitoring and evaluation is to make connections with quality curriculum, instruction and assessment practices at the classroom level.  In the Understanding by Design framework, McTighe and Wiggins suggest “effective curriculum is planned backward from long-term, desired results through a three-stage design process (Desired Results, Evidence, and Learning Plan).”  In other words, classroom teachers should consider where they want students to be at the end of a unit, course or reporting period and plan backwards.  The same concept can and should apply to any major building- or district-wide curriculum change.  Curriculum leaders in tune with the performance function should first consider what success could look like at the end of the school year.  Unfortunately, I was guilty on more than one occasion of launching a new idea without being able to articulate what “success” would look like as the year progressed.

Instructional coaches and others in curriculum leadership roles are in an excellent position to ask each other questions such as:

  • “If we implemented this change with fidelity, what would our teachers be doing differently in May when compared to September?” and
  • “If we excelled at this change, what would our students be doing differently in May when compared to a year ago?” 

Too often, we’re guilty of providing a wonderful splash event kicking off the school year which encourages all staff to think more deeply about grading practices, social emotional learning or trauma-sensitive schools.  We might even follow it up by seeking feedback from teachers on their perceptions of the August workshop and using this information to plan a follow-up in October.  An alternative approach might be to cast a tangible vision of what the staff and students would be doing differently six months later, and regularly providing staff with feedback along a continuum as they work towards this ideal state. 

One such tool used to monitor and evaluate an educational change is an innovation configuration map.  In case this is a new concept to you, “An IC Map specifies behaviors and expectations related to implementing a curriculum, intervention, or evidence-based practice and categorizes these behaviors on a spectrum from ideal to less than ideal” (REL).  An excerpt from one of Central Rivers AEA’s innovation configuration maps on number talks is included below.

Note how a teacher could self-assess to determine the extent to which his/her environment is conducive to number talks.  Additional components of the unabridged innovation configuration map for math talks include teacher role in student discourse; teacher questioning; teacher notation; academic language; and instructional time, to name a few.  Through the use of this tool, a teacher sees that he/she should be working towards a new type of seating, using student hand signals, implementing specific questioning techniques, and utilizing math problems with increased rigor to make connections with previous learning.  Although the August workshop on math talks may provide an overview of what math talks are and are not, an innovation configuration map can give all educators a description of the desired state.  Similarly, curriculum leaders may ask teachers to self-assess their current progress along the IC map.  This information could be used to plan next steps in professional learning and to celebrate interim progress.  At the end of one or more years (assuming countless hours of professional learning, coaching and support, of course!), elementary teachers may be expected to be fully implementing math talks in their classrooms.  The innovation configuration map provides a visual of what this change (math talks) looks like in the end, when implemented with fidelity.

Our role as curriculum leaders is to monitor and evaluate the changes initiated by the department of education or local administration.  We owe it to our colleagues to show them what success looks like early and often.  An innovation configuration map is one possible tool to assist in this quest towards comprehensively monitoring and evaluating professional development.  Dr. Tom Guskey’s five level professional development evaluation framework suggests educators’ use of new knowledge and skills takes time and as such, requires ongoing evaluation:

“…Did the new knowledge and skills that participants learned make a difference in their professional practice? The key to gathering relevant information at this level rests in specifying clear indicators of both the degree and the quality of implementation. Unlike Levels 1 and 2, this information cannot be gathered at the end of a professional development session. Enough time must pass to allow participants to adapt the new ideas and practices to their settings. Because implementation is often a gradual and uneven process, you may also need to measure progress at several time intervals.”

Guskey, 2002

Implementing a new instructional practice takes time.  I suggest that curriculum leaders move away from “spray and pray” professional learning in which we hope scattering snippets of training and support will somehow find their way into all classrooms.  If blended learning is the professional learning focus of the year (or the next few years), curriculum leaders should begin with the end in mind, share the success criteria, and provide regular support to help everyone achieve the intended results.  If collaborative teams within a professional learning community philosophy are the improvement focus in 2019-20, all teachers should know what an effective teaming environment looks like and does not look like, as well as where they are along the implementation continuum.

In closing, curriculum leaders who value performance know that monitoring and evaluation matters.

Resources to further learning as a leader of performance:

  • Understanding by Design Guide Set by Wiggins and McTighe (2014, ASCD)
  • Evaluating Professional Development by Guskey (1999, Corwin)

Standards-based grading: Lock-ins and lecture halls

(a.k.a. why schools choose to adopt educational practices such as grading that are often different from colleges/universities)

As my previous high school embarked upon making our grade book more reflective of what students are expected to learn, a question from parents (and sometimes students or teachers) often came up: “If colleges/universities are continuing to grade using points and percentages, why are we changing to standards-based grading?”

No doubt this is an important question and comes from a mindset of “I want to make sure my child will be prepared to succeed in college.”

Here’s the thing — there are a TON of ways the high school experience does not mimic colleges and universities.

Lecture halls

I graduated from a small private college in Iowa. The class sizes I experienced ranged anywhere from 4 students (secondary math methods) to around 75 students (an introduction to psychology course). However, some of my friends who attended larger institutions of higher learning shared with me that they had classes with 100+ students! A little investigation into introductory biology classes, for example, suggests it is the norm for some colleges/universities to teach over one hundred students in lecture halls designed to accommodate as many as 300 learners. In the spirit of “preparing our kids for college,” one line of thinking might suggest that we should significantly increase high school class sizes in the sciences as well as deliver content primarily in a lecture format. Well, inquire with pretty much ANY high school science teacher and they’ll tell you smaller class sizes are desirable in order to personalize learning. In other words, it would be silly to mimic the practice of higher education regarding class size.


Lock-ins

In my undergraduate days, I lived on campus for all four years. Moving out of mom and dad’s basement into the dorms was an adjustment for me as I’m sure it is for many freshmen. In this “we must prepare our students for college” line of thinking, we might also begin having weekend-long lock-ins at the high school or middle school level to assist in the transition towards residential college life. Again, I suggest talking with any high school teacher or principal about the reasons lock-ins do NOT regularly happen with 16, 17 and 18 year olds. In other words, it would be silly to mimic the practice of higher education in this regard as well.

In summary, the purpose of this brief essay was to point out high schools specifically CHOOSE to do things they know are developmentally appropriate and in line with quality educational practices. When it comes to grading, it is our moral imperative to better communicate students’ current levels of learning and provide students with multiple opportunities to demonstrate their understanding, a few of the major tenets of standards-based grading.

Considering standards-based Grading: Challenges for Secondary School Leaders

Rather than awarding points for a combination of worksheet completion, quiz performance, in-class participation, and essay writing, standards-based grading separates academic achievement from nonacademic factors when reporting progress towards mastery of course or grade-level standards. Some secondary schools are moving towards standards-based grading (SBG) in an attempt to produce more consistent grading practices; however, the empirical evidence resulting from this change is mixed. The purpose of this article is to describe principles of standards-based grading, empirical support for SBG, and several common challenges secondary school leaders may face when considering this philosophical shift. Future research recommendations include exploring the perspectives of college students who graduated from high schools using SBG to understand the longer-term successes and shortcomings of the grading system.

This peer-reviewed article was published in the Summer 2019 issue of the Journal of School Administration Research and Development and is available online.

Townsley, M. (2019). Considering standards-based grading: Challenges for school leaders. Journal of School Administration Research and Development, 4(1), 35-38.

Too often, standards-based grading is not the problem (or the solution).

At a few recent workshops I have facilitated, well-intentioned teachers submitted the following questions:

  • How do we hold students accountable for homework?
  • What do we do with students who do not want to reassess?

I was delighted to share my personal experience as a teacher and district administrator involved with standards-based grading, however in each case, I started with the following caveat:

That’s a really great question!
Let’s be honest with ourselves though for a moment: these are motivation issues educators are grappling with regardless of their grading system. In other words, standards-based grading is not the problem OR the solution to motivating struggling learners.

For example, ask any secondary teacher using points and percentages in their classroom and they’ll share their struggles motivating some students to complete homework assignments. When homework is repurposed as ungraded practice in a standards-based grading classroom, there’s a temptation for students to not complete it. In both grading systems, our task as educators is to motivate struggling learners. There’s no quick and easy step-by-step answer!

Standards-based grading does often provide educators (and parents) with better information, which in turn can cause us to raise these types of questions with an increased level of concern. That’s a good thing though, right?!

When a few teachers caught me during a break in the workshop, they confirmed they were in favor of moving forward with standards-based grading at their school; however, it is possible other teachers may not share this mindset. If student motivation is used as a reason for not moving towards standards-based grading, it may be helpful to remember that standards-based grading is often neither the problem nor the solution.

Walking the talk: Embedding standards-based grading in an educational leadership course

The purpose of this recently published paper is to provide a model for educational leadership faculty who aspire to walk the talk of effective feedback by embedding standards-based grading (SBG) in their courses. Rather than focusing on learning, K-12 classrooms across the country treat points as their currency. Over 100 years of grading research suggests typical grading practices are subjective at best. Some schools are responding by implementing SBG, yet few articles describe how higher education embeds this philosophy in educator preparation coursework. In this essay, the author documents how to design assessments, align rubrics, and provide feedback to aspiring school leaders in line with three tenets of SBG.

This peer-reviewed journal article is available for download here.

Townsley, M. (2019). Walking the talk: Embedding standards-based grading in an educational leadership course. Journal of Research Initiatives, 4(2).


Leaders of Relationship Building: Listening is a part of leading

[Note to readers: This column is part of an ongoing series for Iowa ASCD’s The Source e-newsletter.]

Leaders of Relationship Building: Listening is a part of leading

What does it mean to be a curriculum lead? This is the fifth column in a series for Iowa administrators, teacher leaders and anyone else interested in enhancing curriculum leadership. So far, we’ve discussed the work of curriculum, instruction, and assessment; data analysis, processes, and professional development. This week, we’ll be taking a closer look at what it means to be a leader of relationship building. Future columns will consider the remaining facets of curriculum leadership: performance, operations, and change.

According to the functions of our work, curriculum leads “seek first to understand and then to be understood.” Listening matters! One way of thinking about this is that administrators ought to be the least vocal educators in the room. Sure, the central office and the principalship come with a TON of positional authority, but this doesn’t mean administrators ought to be flexing their muscles with top-down conversations at every meeting. I am not suggesting grass-roots, plan-every-detail-during-the-meeting-by-vote leadership is required either. In fact, I was advised early in my administrative career that time is limited, therefore it is more efficient to critique than create in most meetings. Some of the most effective administrators I have observed come to a meeting with a tentative plan for moving forward and ask those in attendance to provide their input. During the meeting, the team often revises, re-orders or re-prioritizes the next steps. Other times, administrators bring more than one option for the team to consider and seek their input on the most feasible option. In both cases, listening to others is a high priority.

At one leadership team meeting composed of administrators and teacher leaders at Solon, I brought what I thought was a timely plan for our upcoming professional learning day. Per the usual protocol, I asked the leadership team to provide additional guidance: “What were we missing?” and “What do teachers need right now?” As it turns out, the initial plan I presented was not in tune at all with the leadership team’s thinking, so we ended up scrapping the whole thing and starting from a blank slate! This was not very efficient at all! In fact, it took a lot more time to clean up the mess I had created through extra work following the planning meeting. On a good note, the team ended up facilitating a much better professional learning day. In addition, I was told by several teachers that administration earned even more trust with those in the planning meeting because of the way the situation was handled.

Instructional coaches and others in curriculum leadership roles might model this practice, too, by asking a plethora of questions at planning meetings or in reflective conversations following classroom observations. For those of us who have completed Iowa evaluator approval coursework, the ORID question framework (PDF) is one to consider in conversations with colleagues in a meeting. Objective questions such as “Where does this next step fit into our district strategic plan?” can help a team stay focused on the right work. Reflective questions might sound something like: “How do we feel yesterday’s professional learning went?” Interpretive questions could include “What things could we do next week to increase teachers’ understanding of the writing workshop framework?” and “What other ways could we assess the sheltered instruction lessons we observed this morning?” Finally, curriculum leaders might use decisional questions such as “What supports do we need to continue to work on these areas of concern?” to encourage a group towards action. All of these questions are designed to facilitate conversation rather than monopolize it!

Our role as curriculum leaders is also to remember these wise words from Susan Scott (2004), author of Fierce Conversations: “the conversation is the relationship.” When we engage in dialogue with fellow educators, we can develop others’ ability to lead or we can micromanage them. As members of many teams, committees, and task forces, we can aim for collaboration whenever possible or we can be the first to be frustrated. When we do it right, our colleagues should describe our conversations as more often encouraging than demanding. Jim Knight (2015), author of Better Conversations, suggests authenticity and good communication work hand in hand. In other words, our aim should be to “walk the talk” of listening. If our end goal is to seek input frequently and do nothing with it, our motives will quickly be exposed. Knight (2015) encourages educators to know what we believe in and act consistently with those beliefs. May we strive to be known as leaders who listen and seek input from those around us.

In closing, curriculum leaders who value relationship building seek first to understand and then to be understood, all the while understanding the conversation is the relationship.

Resources to further learning as a leader of relationship building:

Fierce Conversations: Achieving Success at Work and in Our Life One Conversation at a Time, by Susan Scott (2004, Berkley)

The Art of Coaching, by Elena Aguilar (2013, Jossey-Bass)

QBQ! The Question Behind the Question: Practicing Personal Accountability at Work and in Life, by John G. Miller (2004, TarcherPerigee)

Better Conversations, by Jim Knight (2015, Corwin)