A comprehensive list of scholarly articles related to standards-based grading. This resource is updated as new articles and studies are published.
What does the research say about standards-based grading?
A research primer [printer-friendly pdf]
Authors: Matt Townsley and Tom Buckmiller, Ph.D.
One hundred years, no research to support it
Traditional grading practices have been used for over one hundred years, and to date, no meaningful research has been published to support them (Marzano, 2000). In an era of data-driven decision making, that is critical to note. Most teachers received little training in reliable and valid assessment methods during their teacher preparation, and they often default to the way they saw their own teachers grade when they were in school. As a result, grading practices vary widely from teacher to teacher (Reeves, 2004) based on style, preference, and opinion, without a research-driven rationale (Cox, 2011; Guskey & Bailey, 2001; Zoeckler, 2007). Contributing to this irregularity is the fact that many schools lack a specific, unified grading policy for teachers (O’Connor, 2009). Today’s parents were also graded using traditional methods (we all were), so this wildly inconsistent way of communicating student achievement and growth has become entrenched and accepted in the way we think about schooling.
The absence of research supporting traditional grading practices is concerning. As schools continue to adopt a standards-based approach to teaching, learning, and assessment, it is critical to understand the research literature on the topic. The purpose of this primer is to provide an overview of the research literature on the topic of standards-based grading.
Why change grading practices?
There are two fundamental reasons why traditional grading practices ought to be reassessed. First, the Common Core has made learning targets more rigorous, consistent, and transparent. The focus has been on creating fewer standards while challenging students to think more deeply and work toward more meaningful applications. Previous iterations of school curricula emphasized far-reaching, low-level rote learning (memorizing facts), so traditional grading practices were perhaps an appropriate way to measure student performance at the time. Today, however, grading experts (Guskey, 2014; Marzano, 2000; O’Connor, 2009; Reeves, 2008) agree teachers should update their grading practices to better align with how and what students are actually learning in schools.
Second, the Every Student Succeeds Act (successor to No Child Left Behind) has changed the way school leaders and teachers operate. These laws mandate that schools may no longer simply fail students who don’t learn and move on (Vatterott, 2015). Instead, all students must become proficient. School leaders must now ensure their system’s purpose is to develop talent rather than merely sort it (Guskey, 2011). Heightened scrutiny of and accountability for the measurement of student achievement has demanded that grades be more reflective of learning. No Child Left Behind initiatives have exposed that traditional grading practices may no longer be an effective way of measuring student progress in the classroom because they do not correlate with performance on standardized tests (Vatterott, 2015).
What is standards-based grading?
Studies show standards-based teaching practices correlate to higher academic achievement (Craig, 2011; Schoen, Cebulla, Finn, & Fi, 2003). Therefore, it is critical that teachers also link assessments and reporting to the standards (Guskey, 2001). Beatty (2013) suggests standards-based grading (SBG) is based upon three principles. First, grades must have meaning. Indicators, marks and/or letters should provide students and parents with information related to their strengths and weaknesses, separating out non-academic behaviors. Second, classroom-grading systems must incorporate multiple opportunities for students to demonstrate their understanding based on feedback. The final principle of standards-based grading is separating academic indicators from extraneous factors such as homework completion and extra credit.
Principle 1: Grades should have meaning
Grades should provide meaningful feedback to students, document their progress, and help teachers decide what instruction a student needs next (Wormeli, 2006). Traditional grades and report cards become muddied and misleading when they combine academic and non-academic factors into a single grade. Non-academic behaviors are important and merit their own reporting mechanism because they matter in college and in a career. These behaviors include punctuality, work ethic, attendance, participation, and the ability to meet deadlines. But when these behaviors are combined with academic information (does my child know how to do algebra?) to form a single grade, learners and their parents can be deceived by a false and inaccurate calculation. Vatterott (2015) gives these examples:
A student can compensate for low understanding of the content and standards by maintaining perfect attendance, turning in assignments on time, and behaving appropriately in class. A different student may understand content and standards perfectly well but receive a low grade because he or she is late to class, fails to turn in assignments on time or acts inappropriately (pp. 63-64).
A grading system shouldn’t allow a student to mask their level of content understanding with attendance, effort, or other peripheral issues (Scriffiny, 2008). These are separate matters and should be reported separately. Instead, a grading system should be built upon clear learning targets, a practice Marzano (2003) supported after finding that students perform up to 20 percent higher than they do under instruction without clear targets.
Principle 2: Multiple opportunities to demonstrate learning based on feedback
Wormeli (2011) proposed allowing “redos” and retakes, a practice often ignored in traditional grading. He argued retakes are necessary in order for the grade to truly capture student growth at the time of reporting rather than a single moment in the past. According to Marzano and Heflebower (2011), if the purpose of a grade is to report mastery, then educators must look for evidence of learning over time with multiple opportunities for updates.
Standards-based grading is a logical extension of this idea and allows teachers to provide clearer, more effective feedback than traditional letter grades. Haystead and Marzano (2009) conducted a comprehensive review of studies on classroom instructional strategies, concluding that the use of scoring scales and the tracking of student progress toward a learning goal over time yielded a 34 percentage point gain. When students were given additional time and feedback for the purpose of learning the intended standards, strong evidence indicated a positive correlation between added instructional time and achievement (see Brown & Saks, 1986, for seminal work).
Principle 3: Putting homework and extra credit in its proper place
Although assigning high grades as rewards can sometimes motivate students (Guskey & Bailey, 2001; Marzano, 2000), assigning low grades as punishment does not encourage students to do better (Dueck, 2014; Guskey, 2000; Guskey & Bailey, 2001; Marzano, 2000; O’Connor, 2009, 2011; Wormeli, 2006). Furthermore, grades used as external incentives can sometimes lead to decreased motivation (Guskey, 2011), diminished performance, addictive behaviors, or cheating (Matthis, 2010).
In a meta-analysis of the research on homework, Cooper, Robinson, and Patall (2006) described a connection between homework and student learning that lasted through the unit test, but no longer. This limited nexus between homework and longer-term indicators suggests student learning is better predicted by more formal measures such as tests, essays, and other classroom assessments. Furthermore, educational assessment experts recommend that formative work (that is, work intended for practice) not be included in the final grade (Stiggins, Frisbie, & Griswold, 1989).
Extra credit is problematic in that the students who would benefit the most from completing it are often not the ones taking advantage of it (Harrison, Meister & LeFevre, 2011; Moore, 2005). More succinctly, awarding extra credit in classrooms has the potential to artificially widen the gap between students performing well and those who are struggling.
We can do better
In the past century, everything from modern medicine to personal computing has evolved and improved; yet our educational system’s grading practices have remained the same, despite a lack of supporting evidence. A standards-based system of assessment appears to be a significant and defensible improvement over traditional grading practices. The logical alignment of a standards-based approach with the Common Core standards, the advocacy of a growing number of respected educational leaders and researchers, and the positive results experienced by many early adopters signal that SBG is positioned to gain traction in more schools (Peters & Buckmiller, 2014). While studying standards-based pilot programs in Kentucky, Guskey, Swan, and Jung (2011) found teachers and families nearly unanimous in their agreement that standards-based reports provided better and clearer information. Thus, the power of SBG lies in the opportunity for a more nuanced and focused conversation between parents and teachers about where students are strong, where they are weak, and how each can help the student (Spencer, 2012). With a growing body of research validating SBG, stakeholders can rest assured that our most important resource, our students, will benefit from this shift.
Beatty, I. D. (2013). Standards-based grading in introductory university physics. Journal of the Scholarship of Teaching and Learning, 13(2), 1-22. Retrieved from http://josotl.indiana.edu/article/view/3264
Brown, B. W. & Saks, D. H. (1986). Measuring the effects of instructional time on student learning: Evidence from the beginning teacher evaluation study. American Journal of Education, 94(4), 480-500. Retrieved from http://www.jstor.org/stable/1085338
Cooper, H., Robinson, J. C., & Patall, E. A. (2006). Does homework improve academic achievement? A synthesis of research, 1987–2003. Review of Educational Research, 76(1), 1–62. doi: 10.3102/00346543076001001
Cox, K. B. (2011). Putting classroom grading on the table: A reform in progress. American Secondary Education, 40(1), 67-87.
Craig T. A. (2011). Effects of standards-based report cards on student learning. (Doctoral dissertation). Retrieved from https://repository.library.northeastern.edu/files/neu:1127
Dueck, M. (2014). Grading smarter, not harder: Assessment strategies that motivate kids and help them learn. Alexandria, VA: ASCD.
Guskey, T. R. (2000). Grading policies that work against standards…and how to fix them. NASSP Bulletin, 84(620), 20-29. doi:10.1177/019263650008462003
Guskey, T. R., & Bailey, J. M. (2001). Developing grading and reporting systems for student learning. Lexington, KY: Corwin.
Guskey, T. R., Swan, G. M., & Jung, L. A. (2011). Grades that mean something: Kentucky develops standards-based report cards. Phi Delta Kappan, 93(2), 52-57.
Guskey, T. R. (2011). Five obstacles to grading reform. Educational Leadership, 69(3),16-21.
Guskey, T. R. (2014). On your mark: Challenging the conventions of grading and reporting. Bloomington, IN: Solution Tree.
Harrison, M. A., Meister, D. G., & LeFevre, A. J. (2011). Which students complete extra-credit work? College Student Journal, 45(3), 550-555.
Haystead, M. W., & Marzano, R. J. (2009). Meta-analytic synthesis of studies conducted at Marzano Research Laboratory on instructional strategies. Englewood, CO: Marzano Research Laboratory. Retrieved from http://www.marzanoevaluation.com/files/Instructional_Strategies_Report_9_2_09.pdf
Marzano, R. J. (2000). Transforming classroom grading. Alexandria, VA: ASCD.
Marzano, R. J. (2003). What works in schools: Translating research into action. Alexandria, VA: ASCD.
Marzano, R. J., & Heflebower, T. (2011). Grades that show what students know. Educational Leadership, 69(3), 34-39.
Matthis, T. L. (2010). Motivational punishment: Beaten by carrots and sticks. EHS Today. Retrieved from http://ehstoday.com/safety/news/motivational-punishment-beaten-carrots-sticks-1120
Moore, R. (2005). Who does extra-credit work in introductory science courses? Journal of College Science Teaching, 34(7), 12-15.
O’Connor, K. (2009). How to grade for learning, K-12 (3rd ed.). Thousand Oaks, CA: Corwin.
Peters, R., & Buckmiller, T. (2014). Our grades were broken: Overcoming barriers and challenges to implementing standards-based grading. Journal of Educational Leadership in Action, 2(2).
Reeves, D. (2004). Making standards work: How to implement standards-based assessments in the classroom, school, and district. Englewood, CO: Advanced Learning Press.
Reeves, D. B. (2008). Effective grading practices. Educational Leadership, 65(5), 85-87. Retrieved from http://www.ascd.org/publications/educational-leadership/feb08/vol65/num05/Effective-Grading-Practices.aspx
Scriffiny, P. L. (2008). Seven reasons for standards-based grading. Educational Leadership, 66(2), 70-74.
Schoen, H. L., Cebulla, K. J., Finn, K. F., & Fi, C. (2003). Teacher variables that relate to student achievement when using a standards-based curriculum. Journal for Research in Mathematics Education, 34(3), 228-259.
Stiggins, R. J., Frisbie, D. A., & Griswold, P. A. (1989). Inside high school grading practices: Building a research agenda. Educational Measurement: Issues and Practice, 8(2), 5-14. doi: 10.1111/j.1745-3992.1989.tb00315.x
Spencer, K. (2012). Standards-based grading. Education Digest, 78(3).
Vatterott, C. (2015). Rethinking grading. Alexandria, VA: ASCD.
Wormeli, R. (2006). Fair isn’t always equal: Assessing and grading in the differentiated classroom. Portland, ME: Stenhouse.
Wormeli, R. (2011). Redos and retakes done right. Educational Leadership, 69(3), 22-26.
Zoeckler, L. G. (2007). Moral aspects of grading: A study of high school English teachers’ perceptions. American Secondary Education, 35(2), 83-102.
I received some positive feedback from the Top 10 Standards-Based Grading Articles list, so I thought it might be helpful to share a similar list of books¹.
- O’Connor, K. (2009). How to grade for learning, K-12 (3rd ed.). Thousand Oaks, CA: Corwin Press.
Ken O’Connor has written a number of books and articles geared toward practitioners. How to Grade for Learning helped me think through several components of grading I needed to change in my own classroom, including “basing grades on standards” and “emphasizing most recent information.” There’s a reason the grade doctor’s books are so popular!
- Guskey, T.R. (2015). On your mark: Challenging the conventions of grading and reporting. Bloomington, IN: Solution Tree.
No top ten list of standards-based grading books would be complete without at least one written by Dr. Tom Guskey. On Your Mark is a comprehensive piece written for an audience who needs to understand why grading practices need to change. I envision these chapters as meaningful content for book study teams in schools across the country.
- Wormeli, R. (2006). Fair isn’t always equal: Assessing and grading in the differentiated classroom. Portland, ME: Stenhouse.
Rick Wormeli is an author and former middle school practitioner. This book tackles concepts such as redos and retakes, the role of homework in the final grade and setting up grade books that reflect student learning. I often categorize Wormeli’s work as less standardized than Marzano and more practical than Guskey.
- Jung, L. & Guskey, T.R. (2012). Grading exceptional and struggling learners. Thousand Oaks, CA: Corwin Press.
Not sure how ELL and special education students fit within a standards-based grading context? When are accommodations appropriate? When should modifications be made to the standards themselves? This book has some answers!
- Guskey, T. R., & Jung, L.A. (2013). Answers to essential questions about standards, assessments, grading, & reporting. Thousand Oaks, CA: Corwin Press.
If educators are looking for a book in the form of frequently asked questions, this is it. Beyond theory and outside of day-to-day classroom practice, Guskey and Jung lay out responses to questions teachers, administrators, parents and school board members may have about non-traditional grading practices.
- Brookhart, S. M. (2013). Grading and group work. Alexandria, VA: ASCD.
Group work is still a valuable part of standards-based grading classrooms! Susan Brookhart helps readers understand the difference between learning in collaborative groups and assessing group work. Any teacher or school moving towards standards-based grading would benefit from understanding these ideas early on in the process.
- Reeves, D. (2010). Elements of grading: A guide to effective practice. Bloomington, IN: Solution Tree.
I have always appreciated Dr. Doug Reeves as a speaker and author. This book is no exception. Reeves blends together research, logic and examples from schools to help readers think through toxic grading practices and their solutions. Keep an eye out for the second edition of this book!
- Fisher, D., & Frey, N. (2007). Checking for understanding: Formative assessment techniques for your classroom. Alexandria, VA: ASCD.
Fisher and Frey’s book holds a special place in my heart, because the day I had the initial “I’d like to try out standards-based grading in my classroom” discussion with my high school principal, he handed me this book as a resource. I believe grading and assessment practices need to go hand in hand. This book provides more than enough practical tips and strategies for a classroom teacher to try out in a school year.
- Marzano, R. J. (2010). Formative Assessment & Standards-Based Grading. Bloomington, IN: Marzano Research Laboratory.
It would have been hard to create a top ten standards-based grading book list without including this Marzano text. Of all the books I’ve read in the past ten years, this was the most highly anticipated; however, I cannot recommend all of its ideas for across-the-board use. Marzano uses a formulaic way of creating tiered assessments that, while easily scalable across multiple classrooms and buildings, runs counter to my beliefs about authentic and meaningful classroom assessment.
- Heflebower, T., Hoegh, J.K., & Warrick, P. (2014). A school leader’s guide to standards-based grading. Bloomington, IN: Marzano Research Laboratory.
See previous comments about the Marzano book on formative assessment and standards-based grading. It would also be hard to create a list without including this book, because it is the only book I know of focused on school leaders. Enjoy!
What books would you add to this list?
¹All of the books on this list focus on effective grading practices, whether or not “standards-based” appears in the title.
UPDATE: For articles published in 2016 or later, see this list.
Every once in a while, I receive an email from an educator or parent interested in standards-based grading (SBG) asking for an introductory reading list. I typically attach several of my favorites and then link to an ongoing list of articles curated over the past several years for further reading. Earlier this week, a professional acquaintance suggested I share a top ten standards-based grading articles list. Challenge accepted!
Without further ado, here are my top ten standards-based grading articles¹.
- Scriffiny, P.L. (2008). Seven reasons for standards-based grading. Educational Leadership, 66(2), 70-74 [Available online]
Patricia Scriffiny is a math teacher who mixes in the “why” of standards-based grading with a few of her own classroom examples. Any school or department considering the shift to SBG could use this article as a conversation starter.
- Peters, R. & Buckmiller, T. (2014). Our grades were broken: Overcoming barriers and challenges to implementing standards-based grading. Journal of Educational Leadership in Action, 2(2). [Available online]
Two Drake University researchers interviewed a number of building and district administrators in order to describe the ups and downs of implementing SBG systemwide. Barriers in the process included: student information and grading systems, parents/community members, the tradition of grading and fear of the unknown, and the implementation dip. I’ll let you read the rest!
- Winger, T. (2005). Grading to communicate. Educational Leadership, 63(3), 61-65. [Available online]
In the summer workshops I’ve facilitated, Winger’s article is almost always a hit. Tony is a practicing educator who mixes thought-provoking questions with his own classroom reality. Questions such as “do grades interfere with learning?” and “do grades provide accurate feedback?” are bound to stir up some heated conversations among educators at all grade levels.
- Erickson, J.A. (2011). A call to action: Transforming grading practices. Principal Leadership, 12(1), 42-46. [pdf]
Jeffrey Erickson is a practicing school administrator who writes about his experiences changing grading practices in a suburban high school. While his ideas don’t quite meet my personal idea of standards-based grading (e.g. homework still counts towards a small percentage of the final grade), I believe his ideas are on the right track and worth sharing with others.
- Clymer, J.B., & Wiliam, D. (2006). Improving the way we grade science. Educational Leadership, 64(4), 36-42. [Available online]
Looking for a practical view into a standards-based grading classroom? This is it! Eighth grade science teacher Jacqueline Clymer shares a sample grade book and a summary of student reaction to standards-based grading in the classroom. The obvious target audience is science teachers who want to “see” SBG in action.
- Jung, L., & Guskey, T.R. (2011). Fair & accurate grading for exceptional learners. Principal Leadership, 12(3), 32-37. [pdf]
Hold on…what about students with special needs?! University of Kentucky researchers LeeAnn Jung and Thomas Guskey team up to communicate, “standards-based grading is the most accurate method to assess students’ abilities.” Students with IEPs and English language learners may need modifications or accommodations and this article describes how to fairly do so in a standards-based grading setting.
- Iamarino, D. (2014). The benefits of standards-based grading: A critical evaluation of modern grading practices. Current Issues in Education, 17(2). [Available online]
In this peer-reviewed article, the author examines the literature to evaluate various grading practices before concluding “modern grading practices are rife with complexity and contradiction. They are remnants of archaic conventions, and hybrids of newer methodologies not yet tried by time and application” (p. 9). I wouldn’t recommend this piece as a first read, but rather for educators with a more philosophical or theoretical bent.
- Wormeli, R. (2011). Redos and retakes done right. Educational Leadership, 69(3), 22-26.
Reassessments are one of the most hotly contested aspects of standards-based grading from the perspective of teachers and parents. Wormeli’s article describes compelling reasons reassessments make sense while providing teachers a list of practical strategies to try out in their classrooms.
- Guskey, T.R. (2013). The case against percentage grades. Educational Leadership, 71(1), 68-72.
This article alone is worth the price of purchasing the September 2013 issue of Educational Leadership. Dr. Guskey briefly describes the history of grading and goes on to differentiate percentage grades from percentage correct. Not sure why a 4 or 5 point scale is more accurate and appropriate when compared to a 100 point scale? This is your go-to source.
- Vatterott, C. (2011). Making homework central to learning. Educational Leadership, 69(3), 60-64.
Any meaningful conversation about grading practices involves the purpose of homework. Dr. Cathy Vatterott is often called “The Homework Lady.” This article provides schools a framework for unifying educators around the purpose and emphasis of homework within standards-based grading.
What articles would you add to this list?
¹Articles must describe the why and/or how of effective grading practices, and priority was given to articles available publicly online.
Static URL: http://tinyurl.com/top10sbg
PK-12 education is funded as a social service, yet we’re expected to produce Cadillac-like results.
Think about that for a moment. When a primary breadwinner loses his or her job in the United States, unemployment benefits are intended to assist with, but not fully replace, the financial needs of the household. Social Security was never intended to serve as an individual’s sole retirement income. City parks are available for families without their own green space to enjoy. All of these social services were created to supplement, not supplant.
It should not be a surprise that state departments of education, school boards, and local administrative cabinets frequently add to the list of school initiatives in order to meet the public’s growing expectations. Yet time and financial resources often remain stagnant.
“The Law of Initiative Fatigue states that when the number of initiatives increases while time, resources, and emotional energy are constant, then each new initiative—no matter how well conceived or well intentioned—will receive fewer minutes, dollars, and ounces of emotional energy than its predecessors.” (Reeves, 2010, p. 27)
As an exercise, write down the list of initiatives and programs started in your local school building or district during the past five years. Next, write down a list of initiatives and programs discontinued during the past five years as the result of careful evaluation. Which list was longer?
Odds are the tally of new initiatives and programs is at least double the length of the discontinued list.
“…Teachers who had 80 or more hours of professional development in inquiry-based science during the previous year were significantly more likely to use this type of science instruction than teachers who had experienced fewer hours…The three studies of professional development lasting 14 or fewer hours showed no effects on student learning, whereas other studies of programs offering more than 14 hours of sustained teacher learning opportunities showed significant positive effects. The largest effects were found for programs offering between 30 and 100 hours spread out over 6–12 months.”
(Darling-Hammond & Richardson, 2009, p. 49)
The number of initiatives continues to grow, and as the research synthesis above describes, effective professional learning requires a commitment over multiple days. Where will schools find the time? Why does it seem like schools are always starting something new? Given that:
- school systems have more on their plate than ever before, and
- effective professional learning requires a long-term commitment, and
- the Law of Initiative Fatigue suggests time, resources, and emotional energy are constant…
…what are some possible next steps?
An Initiative Management Framework¹
Rather than thinking only about the new initiatives to start for the upcoming semester or school year, I would like to encourage school leaders to consider a framework based upon the Iowa Professional Development Model and some of Doug Reeves’ work. Here are the steps, followed by a more detailed description of each one.
- Establish a professional development leadership team.
- Curate an initiative inventory.
- Conduct an initiative audit.
- Commit to monitoring no more than six initiatives, at least annually.
Step 1: Establish a professional development leadership team.
The Iowa Professional Development Model Technical Guide describes several purposes of this leadership team:
- To help organize and support various professional development functions
- To engage in participative decision making – a democratic decision making process for keeping teachers involved and informed.
- To help principals sustain a focus on instruction and keep professional development functions going.
- To distribute leadership and responsibility up and down the organization.
This team should involve teachers and administrators. When leadership teams already exist, I have found it helpful to create a visual describing how the various teams or committees across the district should interact and relate to one another.
Our “district leadership team” serves as our primary professional development leadership team. It is composed of at least two teachers from each building as well as building and district administrators. The primary purpose of meeting four to six times per year is to plan and monitor our district-wide professional learning activities. For the past several years, we’ve annually re-evaluated a multiple-year outline of where we see ourselves going in order to keep our eyes and minds focused simultaneously on short- and long-term outcomes. As the central office administrator overseeing this team, I often come to the meetings with professional learning outcomes and/or a skeleton plan written in pencil. I’ve found it is more efficient for the teachers we’re pulling out of their classrooms to critique my plans than to start from scratch during the meeting. The team is asked to evaluate and revise my initial plan against our collective vision and the practical needs of their classroom colleagues. Sometimes the team rubber-stamps my ideas; however, we’ve also started over a time or two after throwing out my first draft entirely. I believe the teachers on this committee feel empowered as a result of this process. Our final plans are created and vetted by multiple brains, and that’s a good thing.
Hold on…”but we already have leadership teams in our school!!” Now is the time to ask a few questions:
- How often do these teams meet and for what purpose?
- What influence do these teams have on the planning and monitoring of professional learning activities?
- Has this team ever conducted an initiative inventory and audit?
Step 2: Curate an initiative inventory
This step in the framework is important to get right.
- Ask a group of administrators to list the initiatives started during the past five years.
- Ask a group of teachers (on the PD leadership team or in a focus group) to list the initiatives started during the past five years.
- Compare the lists and agree on a common initiative inventory.
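If it helps to see the reconciliation in the third component concretely, it amounts to simple set arithmetic: the common inventory is everything either group named, and the items only one group remembered are flagged for an honest conversation. A minimal Python sketch, with all initiative names invented purely for illustration:

```python
# Hypothetical sketch of reconciling the two initiative lists.
# Every initiative name below is invented for illustration.
admin_list = {"PLC teams", "1:1 devices", "Standards-based grading"}
teacher_list = {"PLC teams", "New literacy curriculum",
                "Standards-based grading", "Data walls"}

# Initiatives both groups agree are currently active.
agreed = admin_list & teacher_list

# Items only one group remembered; these deserve discussion.
discuss = admin_list ^ teacher_list

# The common inventory is everything either group named.
inventory = admin_list | teacher_list
```

In practice the "lists" live on chart paper rather than in code, but the point stands: the inventory is built from both perspectives, and the mismatches are where the most revealing conversations happen.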
The first two components of this step can be completed synchronously or asynchronously; however, I believe it is important that teachers and administrators create their lists separately so that an honest assessment takes place. In my experience, administrators tend to create shorter lists than teachers. Particularly in schools with frequent or recent staff and/or administrator turnover, the comparison in the third component can be an interesting conversation, because it helps teachers and administrators clarify what they both agree are the expectations at the moment. That is, until the next step of the process!
Step 3: Conduct an initiative audit
Now that the professional development leadership team has an agreed-upon list of the initiatives started during the past five years, it is time to narrow it to a more manageable list of six or fewer priorities (Reeves, 2010). At this point in the process, the team may need additional information about each initiative, such as how it should be presented to the staff and how it should be monitored over time. This may differ significantly from previous practice because of limited resources!
For each initiative listed on the inventory…
- List the number of hours of teacher professional learning time allocated for theory, demonstration, practice and collaboration (Iowa Professional Development Model).
- Identify monitoring/implementation data (teacher perceptions from surveys, walkthroughs, teacher artifacts, etc.)
- Create a hypothetical year-long goal (e.g., “100% of teachers will create a project-based unit this year”)
- Identify monitoring criteria, a.k.a. evidence of successful goal completion.
- Consider the number of hours needed to implement the goal.
The final part of this step is for teachers and administrators to create a prioritized list of initiatives based on the last two items above (monitoring criteria and hours needed). It is important to point out that administrators should have the final authority to use the prioritized lists to narrow down a final tally of no more than six initiatives. This final “plan on a page” must have the stamp of approval from the appropriate personnel who allocate limited financial and time resources throughout the building or district.
Step 4: Commit to monitoring no more than six initiatives, at least annually
One of the best ways I can think of to promote transparency and build trust in a school is by publicly monitoring (and following through on) a small number of building or district priorities. Some initiatives may take multiple years. What does the administration expect of its staff during the year? Communicate it! Check up on it! Celebrate it!
The purpose of this initiative management framework is to help schools consider one way to clearly communicate and support their most important priorities. The steps are relatively straightforward. I’d love to receive your feedback if you use all or part of it in your local context.
- What are the biggest barriers you see in implementing this framework?
- If an outsider walked into your school/district today, how confident are you that teachers could recite the goals/priorities for the year?
¹I acknowledge this framework is not something entirely new or original. My hope is that at least one school or district will consider the ideas presented in this framework for the purpose of reducing the number of concurrent initiatives/programs. I strongly believe appropriate and meaningful teacher involvement throughout the process is essential to understanding all perspectives within an educational system.
Nearly 18 months ago, I started a third graduate program and about six months ago, I shared an update on this latest academic pursuit. So far, I’ve completed five semesters of coursework, a combination of research methodology, core doctoral classes and now dissertation research hours.
The past two semesters have involved completing two courses in a three-course dissertation mentoring sequence. If I understand the “typical” doctoral program correctly, the mentoring courses are somewhat unique to our University of West Georgia Ed.D. program. The dissertation brainstorming and writing is scaffolded across several semesters rather than waiting until all core coursework has been completed. In Dissertation Mentoring I, the focus was understanding the dissertation process and beginning to develop ideas for our problem statements. Dissertation Mentoring II built upon the problem statements and ended with writing a draft of chapter two, the literature review (more about that in the next paragraph). Throughout these two semesters, we’ve started to communicate with our dissertation committees, including one-on-one virtual methodologist consultations. I spent the first three semesters of the program preparing myself for a study along the lines of rural central office leadership, but due to the qualitative nature of the likely research questions and my background teaching high school statistics (very quantitative), I realized (through some very wise counsel from my chair) it made more sense to pursue a topic with more quantitative questions. I’ve “settled” on a topic that will look at high school math and/or ELA grades based on standards and any correlations they might have with college readiness measures. More to come about this topic after my committee has a chance to weigh in on it.
The days starting with Thanksgiving “break” through finals week (last week) were incredibly stressful. As a person who generally needs seven hours of sleep each night, burning the candle at both ends with five hours or less of rest did not have a positive impact on my body. Thankfully, all assignments were completed on time and I was pleased overall to look back on the 26-page literature review draft. In speaking with others who have completed doctoral coursework while working full-time, I am guessing this type of schedule will rear its head in the future, too!
Here’s where I give a big shout out to Bipul and Steve for the support they’ve provided during the past several semesters. Beginning in qualitative research methods, the three of us have completed a number of group projects together. I think we have a three-way text messaging thread with hundreds, perhaps thousands of messages, celebrations, fears, questions about assignments and S.O.S. pleas. I honestly don’t know how I could have survived the program so far without these two “critical friends.” Each of them has provided meaningful feedback on drafts of problem statements and literature reviews. Despite our time zone differences, it has not been uncommon for the three of us to begin texting after one person’s supper and sign off well past another person’s midnight. Through desperation phone calls and celebratory emails, Bipul and Steve have been there to experience the ups and downs of balancing graduate coursework deadlines with full-time jobs and family illnesses.
Looking ahead to the final four semesters
With a bit of perseverance, I am aiming to defend a dissertation in the spring of 2017. Working backwards, I’ve established the following timeline:
- January/February 2016: Submit draft literature review (chapter 2) for review.
- April/May 2016: Revise literature review. Complete draft of methodology (chapter 3).
- June/July 2016: Defend dissertation proposal (draft of chapters 1-3).
- Fall 2016: Collect and analyze data.
- Winter 2016: Write results and discussion (chapters 4-5) and receive feedback from committee.
- Spring 2017: Continue revising chapters based on committee input. Defend dissertation
- Late April 2017: Graduate!
This is admittedly an ambitious goal! I will have fewer volunteer commitments during the upcoming year, and the course load each semester appears favorable for spending a considerable amount of time working on the dissertation chapters. The program director has assured our cohort that an extra semester or two to complete the program is very feasible, depending on the time it takes to collect and analyze our data. Outside the scope of this doctoral program, I have been working on several studies with faculty in higher education and am realizing it takes a significant amount of time to write up a quality study.
This coming semester, I will be taking Dissertation Mentoring III along with research hours. In conversations with my family, I am planning to treat the dissertation like a part-time job, scheduling hours each week to “work” at the local library through writing and reading. The final three semesters will include one course and research hours. With some careful planning, I would like to continue scheduling weekly time to work on the dissertation in order to meet the aforementioned timeline.
Look for another update in a semester or two!
The adoption of a uniform scale of grades as well as a uniform standard in the frequency with which the different grades are assigned is a pressing need among colleges and secondary schools. (Starch, 1913, p. 636)
Several years later at John Marshall High School:
Our system requires (1) that the mark which is given for scholarship be based on achievement alone; (2) that a uniform distribution be arranged for the school; (3) that in each subject the pupils be grouped so as to approximate this distribution; (4) that marks assigned will approximate the distribution; (5) that ability tests will be given to all pupils to determine their probable learning rates
(Dustin, 1926, p. 29)
On page 30, the results from John Marshall are described:
We regarded failure of 2 percent of the class too low and one of 12 percent too high.
How have things changed, if at all, nearly 100 years later? You can be the judge.
Dustin, C. R. (1926). A scheme of objective marking. Educational Research Bulletin, 5(2), 28-31+40-41.
Starch, D. (1913). Reliability and distribution of grades. Science, 38(983), 630-636.
I subbed in 2nd grade for a while this afternoon. A few quick take-aways:
- It was different than subbing in Kindergarten and preschool (last year’s doses of classroom reality) and a lot different than teaching high school students (six years of trying to get better).
- Flat Stanley is a humorous book.
- The end-of-day routine ensuring students get on the right bus, picked up by parents, etc. may be the most stressful part of the day.
Thanks again and again to elementary teachers. Y’all have a challenging job.
Three siblings and their spouses annually draw names for a Christmas exchange. How many different possibilities are there? (Perhaps it’s not obvious, but married couples would never be asked to buy a gift for each other in the exchange.)
Another way of thinking about this scenario: after how many years are we guaranteed to repeat a previous year’s drawing?
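For readers who want to check their answer, the drawing can be counted directly: a valid drawing is a permutation of the six people in which nobody draws their own name or their spouse’s. A short brute-force sketch (the pairing of people into couples here is just an illustrative labeling):

```python
from itertools import permutations

# Label the six people 0..5 and pair them into couples
# (0-1, 2-3, 4-5), so person i's spouse is i ^ 1.
PEOPLE = range(6)

def spouse(i):
    return i ^ 1

# A valid drawing is a permutation where nobody draws
# themselves and nobody draws their own spouse.
valid = [
    p for p in permutations(PEOPLE)
    if all(p[i] != i and p[i] != spouse(i) for i in PEOPLE)
]

print(len(valid))  # 80
```

So there are 80 possible drawings, which also answers the second question by the pigeonhole principle: the family could in principle go 80 years without repeating, but by year 81 a repeat is guaranteed.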
Yet another “Oh my goodness… can you believe the Common Core is doing this to our kids and families?” news story. After a friend shared this article with me in jest, I thought to myself, “I am a curriculum director, former math teacher and relatively informed educator; do the Common Core standards really prescribe this type of math writing?”
So, I decided to download the Common Core math standards.
I searched for the word “check” and did not see any references to the standards prescribing how to write checks. Next, because the news story referred to an elementary school, I read every math standard for grades K-5 (that’s roughly 30 pages of the standards document). I looked specifically for phrases suggesting how students should write, add or count. Here are a few that stood out:
- Count to 100 by ones and by tens. (K.CC.1)
- Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects) (K.CC.3)
- Count to 120, starting at any number less than 120. In this range, read and write numerals and represent a number of objects with a written numeral. (1.NBT.1)
- Fluently add and subtract within 20 using mental strategies. By end of Grade 2, know from memory all sums of two one-digit numbers. (2.OA.2)
…and I eventually came across this one:
Read and write numbers to 1000 using base-ten numerals, number names, and expanded form (2.NBT.3)
It is evident the Common Core standards do not dictate Xs and Os in place of numbers. Perhaps a math textbook publisher suggested this strategy; however, after reading the standards, it is clear this type of writing is not a required instructional approach.
As I’ve blogged previously, there’s no such thing as the Common Core police. Until educators begin to discern the differences between the intent of the standards and the textbook publishers who freely, and with absolutely no regulation, stamp “Common Core aligned” on their materials, we’ll continue to see more of these types of uninformed news stories.
Please consider printing this post to hang in your local teachers’ lounge and/or sharing it with friends on social media. Let’s help each other fact check Common Core news stories!