Standards-Based Grading: BIG Shift #3 – Repurposing Homework and Checks for Understanding as Ungraded Practice

In standards-based grading, teachers repurpose homework and checks for understanding as ungraded practice. In other words, students should be given opportunities to make mistakes and learn from them, and teachers should use the information from these assessments to inform their instruction.

When I was in middle school, I distinctly remember my math teachers explaining how many points each daily homework assignment was worth and the inherent value of completing each one in a timely manner. It didn’t take too long to figure out that points were the currency of the classroom, and if it meant finding a partner on the bus ride to school for a little extra “assistance” (a.k.a. “copying”), that time often paid off. At the same time, the daily classroom grind often involved learning a new concept such as ratios, attempting problems 1-5 in class, and being expected to complete problems 6-20 on my own time before the next day. Assuming the purpose of these daily assignments was practice, expecting perfection on the remaining 15 problems prior to receiving feedback just didn’t seem right. Yet each day the number of problems I answered correctly was recorded by the teacher in the grade book, which ultimately influenced my end-of-quarter grade.

In standards-based grading, the BIG shift is repurposing homework, mid-unit quizzes, rough drafts of essays, and other assignments designed to check for understanding (rather than summarize learning at the end of the instructional process) as ungraded practice.

One change for teachers using standards-based grading is to move away from reporting points on every single assignment (regardless of its purpose) and toward utilizing more narrative feedback during the instructional process. In the example below, students are asked to indicate their perceived level of understanding in pencil for each standard assessed immediately after completing a mid-unit math quiz (note the question numbers intended to align with each standard, i.e. 1 & 3 for 5.MD.5), along with my feedback to the learner in red, which the student receives the next day.

This post is the third and final in a series highlighting three big shifts in implementing standards-based grading. See below for the previous two.

  1. Standards-based grading big shift #1: Reporting learning rather than tasks
  2. Standards-based grading big shift #2: A mastery mindset

To learn more about all three of these big shifts, including detailed implementation criteria, pitfalls to avoid, and self-assessment continuums for teachers and collaborative teams, see the book Making Grades Matter: Standards-based Grading in a Secondary PLC at Work, available from Solution Tree Press.

Show me the numbers! Standards-based grading quantitative research

The following is a list of some, but likely not all, dissertations and journal articles investigating standards-based grading using quantitative methods. The summaries were copied and pasted from within the published abstract unless noted otherwise.

While there was an increase in all grading areas, two showed a significant difference—the Physical Science course content average (p = 0.024) and the Biology EOCT scores (p = 0.0876). These gains suggest that standards-based grading can have a positive impact on the academic performance of African American students. Secondly, this study examined the correlation between the course content averages and the EOCT scores for both the traditional and standards-based grading system; for both Physical Science and Biology, there was a stronger correlation between these two scores for the standards-based grading system.

Bradburd-Bailey, M. (2011). A preliminary investigation into the effect of standards-based grading on the academic performance of African-American students (Doctoral dissertation). Available from ProQuest Dissertations and Theses (3511593). [Available online]

This study examined the correlation between the grades a student earns in his or her classroom and the scores that each student earned on the Colorado Student Assessment Program tests in Reading, Writing, Math, and Science. The study also examined the mean scores of varying sub-groups to determine if certain sub-groups demonstrated higher means, depending on the school districts in which they were enrolled. While all the school districts that participated in the study showed a significant level of correlation between grades and test scores, Roaring Fork School District Re-1, using a standards-based grading model, demonstrated both higher correlations and higher mean scores and grades across the overall population and sub-groups.

Haptonstall, K. G. (2010). An analysis of the correlation between standards-based, non-standards-based grading systems and achievement as measured by the Colorado Student Assessment Program (CSAP) (Doctoral dissertation). Available from ProQuest Dissertations and Theses (3397087).

From the article, as the abstract was a bit vague in communicating results:  “The results of this study provide evidence that a standards-based grading system, as opposed to a traditional-based grading system, is more closely aligned with the results of the Scholastic Math Inventory standardized test” (p. 12).

Lehman, E., DeJong, D., & Baron, M. (2018). Investigating the relationship of standards-based grades vs. traditional-based grades to results of the Scholastic Math Inventory at the middle school level. Educational Leadership Review of Doctoral Research, 6, 1-16. [Available online]

This study focused on how student learning was impacted when secondary math, science, and language arts teachers use standards-based grading practices in their classrooms. Student learning was measured by term grades and end-of-level SAGE test scores. Results show students who attended a classroom with standards-based grades earned higher GPAs, performed better on the end-of-level test, and had more learning growth over the course of the school year than their peers who participated in traditional grading classrooms.

Poll, T. R. (2019). Standards-based grading: A correlational study between grades and end-of-level test scores (Doctoral dissertation). Available from ProQuest Dissertations and Theses (13428322). [Available online]

Results indicated that the rate of students earning an A or B in a course and passing the state test approximately doubled when utilizing standards-based grading practices. In addition, results indicated that standards-based grading practices identified more predictive and valid assessment of at-risk students’ attainment of subject knowledge.

Pollio, M. & Hochbein, C. (2015). The association between standards-based grading and standardized test scores as an element of a high school reform model. Teachers College Record, 117(11), 1-28.

Students at both the honors level and the regular level of mathematics were included in the study. Honors students performed better than regular students on both assessments, but no significant difference was found between the performance of traditionally-graded students and the students who were graded with standards-based grading. The results of this study indicate that standards-based grading may offer improved methods of communication between teachers, parents, and students and may give students a new perception of learning. Standards-based grading strategies require careful planning, dedication, and follow through. It is not an endeavor to be entered into lightly, but rather, the appropriate amount of time, resources, and preparation can provide students the chance to truly learn content at a mastery level.

Rosales, R.B. (2013). The effects of standards-based grading on student performance in algebra 2 (Doctoral dissertation). Retrieved from http://digitalcommons.wku.edu/diss/53/

The Second Wave of Standards-Based Grading in Iowa’s Secondary Schools

[Note to readers: This column was recently published in Iowa ASCD’s The Source e-newsletter. My co-authors were Dr. Tom Buckmiller and Dr. Robyn Cooper, both associate professors of education at Drake University]


In 2013, the first Iowa standards-based grading conference was held in Cedar Rapids. At that time, only a handful of secondary schools were seriously thinking about standards-based grading (SBG). While Iowa has been a nationwide leader in competency-based education, interest in standards-based grading practices continues to grow as well. Fast forward to 2020, and the landscape of grading practices in Iowa continues to evolve. During the past seven years, nearly every notable grading author, speaker, and consultant has visited Iowa, often on multiple occasions, invited directly by schools or via AEAs and other professional organizations. Educators at these early-adopting SBG schools attest that this work, while important and necessary, has also been challenging.

On the heels of the first wave (early adopters) of SBG, other secondary school leaders are no doubt weighing the odds of a successful implementation process. Our research aimed to gauge the likelihood of a second wave of SBG implementation and the barriers Iowa high school leaders anticipate. We believe the results of this study could help high school and other secondary building leaders forecast challenges and barriers if they are considering changes to update their grading and assessment philosophies.

We reached out to all Iowa high school principals not currently implementing SBG in January 2018. When asked about the likelihood of implementing some form of SBG in the near future and the anticipated barriers, several themes emerged. The first theme was that SBG is indeed part of their vision for the next five years. Nearly 80% of the high school principals who responded indicated SBG was “a part” or “a strong part” of their vision for the next five years. This finding aligned with the attention SBG appears to be given across the state at various conferences and professional learning days. Knowing SBG is on school leaders’ radar is a good start; however, we were also interested in potential barriers.

One of the barriers high school principals identified when thinking about implementing SBG in the next five years was time. Nearly 20% of those who responded indicated a need to educate their local communities about this significant shift. Principals also anticipated a significant amount of time would be needed to work with their teachers to both understand and implement SBG. Similarly, principals anticipated their teachers would need substantial professional learning to simultaneously understand and implement standards-based grading.

After an in-depth analysis of principals’ comments, we suggest several potential implications. For example, schools are advised to “go slow to go fast.” The themes from this study corresponded with our professional experiences and those anecdotally documented by various media outlets: inconsistent and accelerated SBG implementation is an often-cited concern. School leaders are advised to keep in mind that adult learning, organizational growth, and community development take time. In addition, just-in-time, personalized professional learning will be needed to support teachers with varying grading, assessment, and pedagogical backgrounds.

Although SBG is highly defensible with a growing research and literature base, encouraging this shift in practice among teachers who have used traditional grading methods is no easy task. Both teacher and administrator preparation programs will play an important role in educating future educators about effective grading and assessment practices and, over time, breaking down some long-held beliefs. Furthermore, AEAs and professional organizations such as Iowa ASCD will continue to play an important role in supporting schools in their standards-based grading journeys. For example, Heartland AEA is currently offering “SBL Framework Fridays” once per month to support school teams taking their next steps toward standards-based or standards-referenced grading/reporting.

Our research suggests a second wave of SBG may be on the horizon in Iowa; however, previous literature suggests secondary principals may lack the instructional leadership capacity to lead, manage, and sustain a change that so clearly disrupts the way stakeholders think about school. Finally, additional insight is needed to discern the type of professional learning teachers at various career stages and in various content areas find meaningful as they strive to carry out more effective grading practices in their classrooms.

The next step in supporting the next wave of schools is understanding how many Iowa secondary schools are indeed implementing standards-based grading, a data collection Matt Townsley will soon be summarizing and sharing across the state. Similar to the collaborative teacher work so many schools are implementing through professional learning communities, data teams, and authentic intellectual work, it makes sense for schools to collaborate and support one another as they take their next steps in implementing effective grading practices. As the old mantra goes, “together we are better,” and this will indeed be the case for Iowa’s secondary schools banding together for the sake of more transparent communication of student learning.

Note: This article summarizes a manuscript originally published in the December 2019 issue of NASSP Bulletin.

Townsley, M., Buckmiller, T., & Cooper, R. (2019). Anticipating a second wave of standards-based grading implementation and understanding the potential barriers: Perceptions of high school principals. NASSP Bulletin, 103(4), 281-299. https://doi.org/10.1177/0192636519882084

Top 5 standards-based grading articles (2019)

In the past, I curated and recommended lists of standards-based grading books and articles for practitioners and fellow researchers to read. A bulleted list of links to these lists is below.

A year has passed, therefore it was once again time to sift through the research and commentary from the past twelve months. Without further ado, I present to you the top five (5) articles written in the area of standards-based grading practices (in alphabetical order by lead author’s last name).

  1. Battistone, W., Buckmiller, T., & Peters, R. (2019). Assessing assessment literacy: Are new teachers prepared to assume jobs in school districts engaging in grading and assessment reform efforts? Studies in Educational Evaluation, 62, 10-17. https://doi.org/10.1016/j.stueduc.2019.04.009

    As school leaders embark upon grading shifts, they may question whether or not new teachers are prepared to engage in this type of work. This study suggests teacher education training on assessment is inconsistent at best with several corresponding implications for educators.
  2. Feldman, J. (2019). Beyond standards-based grading: Why equity must be part of grading reform. Kappan, 100(8), 52-55. [Available online]

    Joe Feldman lays out a clear argument for traditional grading as a means of perpetuating inequity. Beyond the typical psychometric, social, and logical arguments for standards-based grading, Feldman incorporates a number of helpful examples of how institutional grading bias can and should be overcome in order to create a more equitable learning environment for all students.
  3. Knight, M. & Cooper, R. (2019). Taking on a new grading system: The interconnected effects of standards-based grading on teaching, learning, assessment, and student behavior. NASSP Bulletin, 103(1), 65-92. https://doi.org/10.1177/0192636519826709

    One often overlooked reason for implementing standards-based grading is how it benefits teachers’ instructional practices. Teachers interviewed in this study reported a number of benefits, including a more coherent focus in their teaching and more purposeful instruction that is responsive to student needs while enhancing student growth mindset and ownership.
  4. Guskey, T. R. & Link, L. (2019). The forgotten element of instructional leadership: Grading. Educational Leadership, 76(6).  [Available online]

    Many school leaders may not consider shifts in grading because they lack training on effective grading practices and/or teacher evaluation systems do not often prioritize grading. The authors suggest school leaders study effective grading policies and practices, promote teacher collaboration focused on grading, and clarify the purpose of grading throughout the school. I highly recommend all school leaders take the time to read this article in 2020.
  5. Townsley, M., Buckmiller, T., & Cooper, R. (2019). Anticipating a second wave of standards-based grading implementation and understanding the potential barriers: Perceptions of high school principals. NASSP Bulletin, 103(4), 281-299. https://doi.org/10.1177/0192636519882084

    Full disclosure: This was one of several articles I authored or co-authored on grading in 2019, but it was my favorite one, so I could not leave it off the list!
    Our research followed up on a study from 2014 to identify the challenges secondary school leaders experience when changing the currency of the classroom from points to learning. The results indicated that the game is changing and a new wave of SBG implementation is on the horizon.

What articles would you add to this list from 2019?

How do teachers determine letter grades and GPAs from standards? (Standards-Based Grading)

The purpose of standards-based grading/reporting is to communicate students’ current strengths and areas for improvement relative to course or grade-level standards. It may seem counter-productive to “go back” to using letter grades once a student’s level of learning has been described using an integer scale (e.g., 1-4) with corresponding descriptions of learning. Because some secondary schools may still need to determine letter grades and grade point averages, the purpose of this post is to describe several ways to make this happen. The first step is to determine a standard score. The second and final step is to determine a final grade based upon the standard scores using one of three methods. In other words, a secondary school using standards-based grades does not need to eliminate letter grades entirely if there is a compelling reason to keep them.

Determining a standard score (level of learning)

No mathematical formulas are needed to determine a student’s standard score. Let’s consider Tyler, a student who has a “3” level of learning (demonstrates understanding with minor errors) right now in the grade book for a high school math standard, “Represent data with plots on the real number line” (HSS.ID.A.1). After working with the teacher to complete a re-learning plan, Tyler completes a new assessment on data plots. His teacher determines that he now has a “4” level of learning (demonstrates proficient understanding of the standard). Because we want to communicate Tyler’s current level of learning, the teacher would erase the 3 in the grade book and replace it with a 4 rather than averaging these two attempts.

Determining a letter grade based upon the standard scores

In my experience and observation, schools have used one of three methods when converting standards to letter grades in a standards-based grading environment. I will explain each one in detail below using the following fictitious grade book, which we’ll assume represents a student named Cassy’s level of learning in math near the end of a reporting period.

[Cassy’s level of learning in math near the end of the reporting period (sample)]

Convert to Percentages Method
In order to determine a letter grade using the convert to percentages method, use the following steps:

  • Add up all of the standard scores.
  • Divide the sum by the total number of points possible (the top of the scale multiplied by the number of standards).
  • Use the school’s typical 90%, 80%, 70%, etc. percentage scale to determine the letter grade.

Using Cassy’s math standards and levels of learning above, she currently has a total of 34 points (4+4+3+4+4+4+4+2+4+1). The total number of points possible is 40 (4-point scale × 10 standards). Using a typical 90, 80, 70, 60 scale, Cassy is at 85% (34/40 = 85%), therefore her letter grade would be a B using the percentages method.
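To make the arithmetic concrete, here is a minimal sketch of the convert to percentages method in Python, using Cassy’s scores from above. The function name and the assumption that 4 is the top of the scale are mine; the 90/80/70/60 cutoffs come from this post and would be swapped for a school’s own scale.

```python
def percentage_method(standard_scores, scale_max=4):
    """Sum the standard scores, divide by the total points possible,
    and convert to a letter grade on a 90/80/70/60 scale."""
    percent = sum(standard_scores) / (scale_max * len(standard_scores)) * 100
    # Typical percentage cutoffs named in the post
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= cutoff:
            return percent, letter
    return percent, "F"

# Cassy's ten standard scores from the sample grade book
cassy = [4, 4, 3, 4, 4, 4, 4, 2, 4, 1]
print(percentage_method(cassy))  # (85.0, 'B')
```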

The convert to percentages method will work with many electronic grade books and likely makes sense for parents and students who are used to a points/percentages grading system. At the same time, this method has a “points and percentages” feel to it which is less-than-ideal for communicating student learning in standards-based grading.

Marzano Method

This method of determining letter grades comes from literature written by Robert Marzano and his colleagues. In order to determine a letter grade using the Marzano Method, use the following steps:

  • Average the standard scores.
  • Apply the following conversion scale:
[Source: Marzano, R. J. (2010) Formative assessment & standards-based grading. Bloomington, IN: Marzano Research Laboratory.]

Using Cassy’s math standards and levels of learning, her average is 3.4 (34/10). Based upon the conversion scale above, her letter grade would be an A.

It is important to know that in Marzano’s writing, “4” often describes learning exceeding the standard rather than proficiency, a distinction I recommend reading about in a write-up by Dr. Thomas Guskey here. This may explain why a student with an average of 3.0 to 4.0 receives some flavor of an “A,” while a student must have an average of 2.5 to 2.99 to earn some type of a “B” on this scale.
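As a rough sketch, the Marzano method could be coded as follows. The A and B ranges reflect what is described above; the C, D, and F cutoffs are placeholders I have assumed for illustration, so check the published conversion scale in Marzano (2010) before relying on them.

```python
def marzano_method(standard_scores):
    """Average the standard scores and convert the average to a letter grade."""
    avg = sum(standard_scores) / len(standard_scores)
    # The A (3.00-4.00) and B (2.50-2.99) ranges are described in the post;
    # the C, D, and F cutoffs below are illustrative assumptions -- consult
    # Marzano (2010) for the published conversion scale.
    if avg >= 3.00:
        letter = "A"
    elif avg >= 2.50:
        letter = "B"
    elif avg >= 2.00:
        letter = "C"
    elif avg >= 1.00:
        letter = "D"
    else:
        letter = "F"
    return round(avg, 2), letter

cassy = [4, 4, 3, 4, 4, 4, 4, 2, 4, 1]
print(marzano_method(cassy))  # (3.4, 'A')
```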

I am also not sure how many electronic grade books will determine a letter grade using the Marzano method; therefore, manually overriding final letter grades may be necessary.

Logic Rule Method

In order to determine a letter grade using the logic rule method, use the following steps.

  • Determine a logic rule for your classroom / school.
  • Count up the number of 4s, 3s, 2s, 1s, etc. currently in the grade book.
  • Apply the logic rule.

The following logic rule is adapted from Ken O’Connor’s book How to Grade for Learning, 4th edition. This is only a sample and could be revised (for example) to communicate plus and minus grades.

A = Student has demonstrated level 3 and level 4 understanding for all standards with a majority of 4s. No standard scores are below 3.
B = Student has demonstrated a mix of level 3 and level 4 understanding for all standards with a majority of 3s. No standard scores are below 3.
C = Student has demonstrated a mix of level 2, level 3, and level 4 understanding for all standards with a majority of 3s. No standard scores are below level 2.
D = Student has demonstrated a mix of level 2, level 3, and level 4 understanding with a majority of 2s. No standard scores are below level 2.
F = Student has at least one standard score of 1 or 0.

Using Cassy’s math standards and levels of learning, she has a “1” right now on the standard “Use data from a sample survey to estimate a population mean…”; therefore, her letter grade is an F. At first glance, this particular logic rule may seem harsh; however, it should be noted that students are provided multiple opportunities to demonstrate understanding in standards-based grading, so Cassy has ample opportunity to improve based upon the supports provided by the teacher.
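For readers who follow rules more easily in code, here is a minimal sketch of the logic rule method using the sample rule above. The thresholds mirror the rule as written; how to handle score patterns the sample rule does not mention (for example, an equal number of 3s and 4s) is my assumption and would be a local decision.

```python
from collections import Counter

def logic_rule_method(standard_scores):
    """Apply the sample logic rule (adapted from O'Connor) to a set of
    standard scores. Patterns the sample rule does not cover fall through
    to 'undetermined' here; a school would need a local decision for those."""
    counts = Counter(standard_scores)
    majority = len(standard_scores) / 2

    if counts[1] or counts[0]:          # F: any standard score of 1 or 0
        return "F"
    if counts[2] == 0:                  # A/B: only 3s and 4s remain
        if counts[4] > majority:
            return "A"
        if counts[3] > majority:
            return "B"
    else:                               # C/D: 2s present, nothing below 2
        if counts[3] > majority:
            return "C"
        if counts[2] > majority:
            return "D"
    return "undetermined"

cassy = [4, 4, 3, 4, 4, 4, 4, 2, 4, 1]
print(logic_rule_method(cassy))  # 'F' -- the single score of 1 triggers the F rule
```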

Few, if any, major electronic grade books can currently determine a letter grade using a logic rule; therefore, the teacher would need to manually override the letter grade.

Calculating a grade point average using standards-based grading

Calculating a grade point average (GPA) in standards-based grading follows the local school guidelines, likely similar to when the school previously used points and percentages to determine letter grades. Once a letter grade has been determined from standard scores using one of the methods described above for each course, a grade point average can be calculated for honor roll and/or high school transcript purposes.
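As a simple illustration only, the sketch below maps each course letter grade onto a 4.0 scale and averages the results. The point values are an assumption standing in for whatever the local school guidelines specify; weighted courses, plus/minus grades, and credit hours would all change the mapping.

```python
def grade_point_average(letter_grades, points=None):
    """Average course letter grades on a 4.0 scale. The mapping below is a
    common convention, not a recommendation -- local guidelines govern."""
    if points is None:
        points = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}
    return sum(points[grade] for grade in letter_grades) / len(letter_grades)

# e.g., four courses in a reporting period
print(grade_point_average(["A", "B", "A", "B"]))  # 3.5
```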

Closing Thoughts

Because some secondary schools may have a need to determine letter grades and grade point averages, the purpose of this post was to describe several ways to make this happen. School leaders should consider the pros and cons of each standard score to letter grade conversion method when making a local determination for their building or district.

Anticipating a Second Wave of Standards-Based Grading Implementation and Understanding the Potential Barriers: Perceptions of High School Principals

As secondary school leaders consider a shift toward standards-based grading (SBG) practices, they are no doubt weighing the odds of a successful implementation process. This research followed up on a study from 2014 to identify the challenges secondary school leaders experience when changing the currency of the classroom from points to learning. The results indicated that the game is changing and a new wave of SBG implementation is on the horizon.

This peer-reviewed article was published in the December 2019 issue of NASSP Bulletin.

Townsley, M., Buckmiller, T., & Cooper, R. (2019). Anticipating a second wave of standards-based grading implementation and understanding the potential barriers: Perceptions of high school principals. NASSP Bulletin, 103(4), 281-299.

Standards-based Grading: BIG Shift #2 – A Mastery Mindset

In standards-based grading, teachers have a mastery mindset. In other words, classroom structures and routines are set up to maximize student learning, regardless of when students learn it.

In my experience as a K-12 student, and perhaps yours as well, each unit of study took several weeks or more and ended with some type of culminating assessment (test, project, essay, speech, etc.). The level of understanding I demonstrated at the end of the unit was written in ink. There was nothing I could do to change this static mark regardless of my future level of learning. Once the doors had been shut on the unit, few, if any, opportunities existed to remediate and/or show I had a deeper understanding.

Every educator I have worked with agrees that students learn at different rates. As such, in standards-based grading, the BIG shift is thinking about learning as dynamic rather than static within a reporting period. When students have demonstrated a higher level of understanding following some type of new learning activity, marks in the grade book or report card are revised accordingly. Because our mindset is focused on mastery, we think of learning as documented in pencil during any given quarter, trimester, or semester.

In an upcoming post, I will share the next BIG shift of standards-based grading: repurposing homework and checks for understanding as ungraded practice.

Standards-Based Grading: BIG Shift #1 – Reporting Learning Rather than Tasks

In standards-based grading, teachers communicate goals of learning rather than tasks. In other words, learning is communicated in relation to the course outcomes rather than the activities (homework, quiz, project, essay, etc.) demonstrating the learning outcomes.

For many years in education, this has been the default means of communication to students and parents:

However, 14 out of 16 points does not tell John or his parents the areas in which he has successfully learned the course outcomes and the areas in which John still needs to improve.

In standards-based grading, the BIG shift is seeing learning outcomes (often called “standards” in K-12 schools) reported in grade books and/or report cards.

In an upcoming post, I will share the next BIG shift of standards-based grading: a mastery learning mindset.

Leaders of Performance: Planning with the End in Mind

[Note to readers: This column is part of an ongoing series for Iowa ASCD’s The Source e-newsletter.]


What does it mean to be a curriculum lead? This is the sixth column in a series for Iowa administrators, teacher leaders, and anyone else interested in enhancing curriculum leadership. Over the past year or so, we’ve discussed the work of curriculum, instruction, and assessment; data analysis; processes; professional development; and relationship building. This week, we’ll be taking a closer look at what it means to be a leader of performance. Future columns will consider the remaining two facets of curriculum leadership: operations and change.

According to the functions of our work, curriculum leads “…model, expect, monitor, and evaluate continuous learning of all students and staff members.” Monitoring and evaluation matters! One way of thinking about this is that curriculum leaders ought to consider what the desired outcomes are when taking on a new program or initiative. Any educator who has been around a while knows our profession is pretty good at trying new things and/or renaming old ones. First, it was “instructional decision making,” then it was renamed “response to intervention,” and now we call it “multi-tiered system of supports.” Last year, our district focus may have been project-based learning, and this year it is creating profiles of a graduate. Twenty years ago, we were writing standards and benchmarks from scratch (or let’s be honest, borrowing them from the district next door!), and now we’re digging into the latest iteration of the state’s content standards. I don’t mean to sound cynical in making these observations, but instead to point out just how much time is allocated toward improving schools. We can and should be working toward a culture of continuous improvement. This sometimes means dropping old ideas that do not work and/or trying out new ones. As curriculum leaders, our role is to model and evaluate these changes. As such, the purpose of this column is to suggest several practical ways leaders can evaluate and monitor changes rather than starting and stopping them without attention to fidelity of implementation.

One way to think about monitoring and evaluation is to make connections with quality curriculum, instruction, and assessment practices at the classroom level. In the Understanding by Design framework, McTighe and Wiggins suggest “effective curriculum is planned backward from long-term, desired results through a three-stage design process (Desired Results, Evidence, and Learning Plan).” In other words, classroom teachers should consider where they want students to be at the end of a unit, course, or reporting period and plan backwards. The same concept can and should apply to any major building- or district-wide curriculum change. Curriculum leaders in tune with the performance function should first consider what success could look like at the end of the school year. Unfortunately, I was guilty on more than one occasion of launching a new idea without being able to articulate what “success” would look like as the year progressed.

Instructional coaches and others in curriculum leadership roles are in an excellent position to ask each other questions such as:

  • “If we implemented this change with fidelity, what would our teachers be doing differently in May when compared to September?” and
  • “If we excelled at this change, what would our students be doing differently in May when compared to a year ago?” 

Too often, we’re guilty of providing a wonderful splash event to kick off the school year, one that encourages all staff to think more deeply about grading practices, social-emotional learning, or trauma-sensitive schools. We might even follow it up by seeking feedback from teachers on their perceptions of the August workshop and using this information to plan a follow-up in October. An alternative approach might be to cast a tangible vision of what staff and students would be doing differently six months later, and to regularly provide staff with feedback along a continuum as they work towards this ideal state.

One such tool used to monitor and evaluate an educational change is an innovation configuration map.  In case this is a new concept to you, “An IC Map specifies behaviors and expectations related to implementing a curriculum, intervention, or evidence-based practice and categorizes these behaviors on a spectrum from ideal to less than ideal” (REL).  An excerpt from one of Central Rivers AEA’s innovation configuration maps on number talks is included below.

Note how a teacher could self-assess to determine the level to which his/her environment is conducive to number talks. Additional components of the unabridged innovation configuration map for math talks include teacher role in student discourse; teacher questioning; teacher notation; academic language; and instructional time, to name a few. Through the use of this tool, a teacher sees that he/she should be working towards a new type of seating, using student hand signals, implementing specific questioning techniques, and utilizing math problems with increased rigor to make connections with previous learning. Although the August workshop on math talks may provide an overview of what math talks are and are not, an innovation configuration map can give all educators a description of the desired state. Similarly, curriculum leaders may ask teachers to self-assess their current progress along the IC map. This information could be used to plan next steps in professional learning and to celebrate interim progress. At the end of one or more years (assuming countless hours of professional learning, coaching, and support, of course!), elementary teachers may be expected to be fully implementing math talks in their classrooms. The innovation configuration map provides a visual of what this change (math talks) looks like in the end, when implemented with fidelity.

Our role as curriculum leaders is to monitor and evaluate the changes initiated by the department of education or local administration. We owe it to our colleagues to show them what success looks like early and often. An innovation configuration map is one possible tool to assist in this quest toward comprehensively monitoring and evaluating professional development. Dr. Tom Guskey’s five-level professional development evaluation framework suggests educators’ use of new knowledge and skills takes time and, as such, requires ongoing evaluation:

“…Did the new knowledge and skills that participants learned make a difference in their professional practice? The key to gathering relevant information at this level rests in specifying clear indicators of both the degree and the quality of implementation. Unlike Levels 1 and 2, this information cannot be gathered at the end of a professional development session. Enough time must pass to allow participants to adapt the new ideas and practices to their settings. Because implementation is often a gradual and uneven process, you may also need to measure progress at several time intervals.”

Guskey, 2002

Implementing a new instructional practice takes time. I suggest that curriculum leaders move away from “spray and pray” professional learning, in which we hope scattering snippets of training and support will somehow find their way into every classroom. If blended learning is the professional learning focus of the year (or the next few years), curriculum leaders should begin with the end in mind, share the success criteria, and provide regular support to help everyone achieve the intended results. If collaborative teams within a professional learning community philosophy are the improvement focus in 2019-20, all teachers should know what an effective teaming environment looks like and does not look like, as well as where they are along the implementation continuum.

In closing, curriculum leaders who value performance know that monitoring and evaluation matters.

Resources to further learning as a leader of performance:

  • Understanding by Design Guide Set by Wiggins and McTighe (2014, ASCD)
  • Evaluating Professional Development by Guskey (1999, Corwin)