Ron Smith is a freelance writer, researcher, and consultant with a deep background in education. He has previously been a high school science teacher and has served as Curriculum Assistant Principal, district Curriculum Director, and CFO/CIO and Director of Student Assessment in Lake Oswego. Ron has also worked as a research specialist for the Northwest Regional Educational Laboratory and as a training manager for Intel.
Two recent events provide the opportunity to revisit a couple of recurrent themes in my blogs. The first of these events was the release of the NAEP science results from the 2011 administration. The results were predictable – no significant growth in the number of students achieving at the proficient level. Only about a third of all students tested performed at this level or better. Sound familiar?
The same general pattern was apparent when I reviewed the results from the last NAEP reading and mathematics assessments. And the reasons are also the same. There are no “breakthroughs” in NAEP results because this snapshot assessment program tells us as much about student ability as it does about student achievement. Not all students of a given chronological age will reach the same achievement levels at the same time – particularly when the resources invested per student are essentially uniform. The number of hours of instruction per day, week, and year is broadly equivalent across the United States, and students are focused on the same relatively narrow curriculum.
Current education reform efforts are spread over many different points of emphasis. Prominent among these is the effort to improve teacher quality. By itself, improving teacher quality is a multifaceted, complex program of innovations, including attracting more high performers to the profession, increasing the rigor of teacher education programs, differentiating workplace roles, and varying compensation based on performance. A central pinch point in achieving these goals is teacher supervision. It is a pinch point because all the elements of improving teacher quality rely on teacher feedback that is relevant, accurate, credible and fair. Historically, delivering this kind of feedback has been difficult and largely unrealized.
In thinking about teacher supervision, let’s first consider context. According to the National Center for Education Statistics, the average public elementary school in the United States serves about 500 students. At a student-teacher ratio of 30:1, about seventeen regular classroom teachers would staff a school this size. In addition, let’s assume that the school has no specialists other than one special education teacher, for a total of eighteen professional staff. Let’s work with this configuration as our prototype, since the same organizational principles related to teacher supervision scale up or down pretty well for larger or smaller schools. The same principles apply to secondary schools as well, though with more complications due to more differentiated staffing models.
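The staffing arithmetic behind the prototype is simple enough to sketch in a few lines (the figures are the illustrative ones used in this post, not NCES data):

```python
import math

# Illustrative prototype school from this post (not NCES data).
students = 500
ratio = 30           # students per regular classroom teacher
special_ed = 1       # one special education teacher, no other specialists

# 500 / 30 is about 16.7, so round up to fully staff the classrooms.
classroom_teachers = math.ceil(students / ratio)
total_staff = classroom_teachers + special_ed

print(classroom_teachers, total_staff)  # 17 regular teachers, 18 professional staff
```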
Of the eighteen teachers in our prototypical school, three or four are likely to be master teachers, one or two are likely to be struggling, three or four are likely to be marginally effective, and three or four are relatively new to the teaching profession. Everyone else is meeting expectations pretty consistently. In this school, like most others, there are a variety of performers and a variety of needs for improvement. That’s life.
Dr. Mike Schmoker’s most recent book, Focus: Elevating the Essentials to Radically Improve Student Learning, has some key messages worth serious consideration. He argues persuasively for attending first and foremost to the improvement of curriculum and instruction – to the exclusion of everything else. And, he asserts, if we focus on what matters most, we can rapidly improve student achievement across the board.
Here are his key messages:
- The curriculum that is actually taught is the one that matters. The scope of the written, adopted curriculum (often expressed as standards) is far too broad and often littered with low value targets. Grade level teams of teachers should work to reach professional agreements on a limited set of “power” learning outcomes – and then all teach to them with no exception.
- We know how to teach the curriculum. We don’t have to wait for the discovery of effective techniques. Effective instruction is not mysterious or even especially difficult to implement. Every teacher in every classroom in every school needs to focus on the basics of instruction until they become routine and automatic.
Determining how much a teacher or school contributes to student academic achievement growth is a complicated and difficult aspiration. Under ideal conditions, reasonable estimates can, in theory, be produced. But the real world is far from ideal, and the risk of classification errors is high.
A classification error occurs when a student, teacher or school is incorrectly assigned to a performance category. For instance, a school may be labeled as exceeding expectations for achievement growth, when, in reality, it only meets expectations – or vice versa.
Since many decisions ranging from public disclosures to employee compensation are at stake, we need to pursue the best value-added model (VAM) available and fully explain the level of uncertainty that goes with each rating. And if the uncertainty is too great, decisions should be deferred.
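To see how easily classification errors arise, consider a toy simulation (my own made-up numbers, not any state's actual value-added model): suppose true school growth effects are normally distributed, each estimate carries measurement noise half as large as the spread of true effects, and schools are sorted into three bands by fixed cut points. Even this modest noise misclassifies a substantial share of schools:

```python
import random

random.seed(42)

def category(effect, cut=0.5):
    # Three performance bands: below / meets / exceeds expectations.
    if effect < -cut:
        return "below"
    if effect > cut:
        return "exceeds"
    return "meets"

# Toy model: true effects ~ N(0, 1); each estimate adds noise ~ N(0, 0.5).
n = 10_000
errors = 0
for _ in range(n):
    true_effect = random.gauss(0, 1)
    estimate = true_effect + random.gauss(0, 0.5)
    if category(estimate) != category(true_effect):
        errors += 1

print(f"schools placed in the wrong band: {errors / n:.0%}")
```

Schools with true effects near a cut point are nearly a coin flip, which is why the overall error rate stays high even when the average estimate is unbiased.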
The following was emailed to Oregon’s Superintendent of Public Instruction, Susan Castillo, on 11/07/2011:
Hi Susan – I know you’ve reviewed the most recent NAEP results as have I. The distribution of reading achievement scores for grades four and eight remained essentially unchanged as they have for roughly the last two decades. How can this be? For the last decade, in particular, on a nationwide basis we have spent billions of dollars trying to improve reading achievement. We have spent lavishly on special education, the latest curriculum programs, response to intervention strategies, early childhood literacy programs, staff development programs, technology-based remedial programs – and yet achievement has not improved. Again, how can this be?
The answer is surprisingly straightforward.
In the NAEP results we are seeing the intersection of two controlling variables: differences in cognitive ability among students and the standardization of access to learning.
If you administered a high quality cognitive ability assessment to the same students who took the NAEP reading exam, you would see that the results map to each other to a very high degree. Lower ability students present lower reading achievement and higher ability students present just the opposite.
But if you also overlaid the time provided for learning for these same students, you would find it almost identical at all levels of ability – about 6 hours per day for about 180 days per year.
Ability varies (as it always has), yet instruction time is about the same (as it has been for decades). More than three quarters of the variance in test scores can be explained by these factors alone.
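A toy simulation makes the mechanism concrete (synthetic numbers chosen for illustration, not NAEP data): if instruction time is held constant for everyone, it contributes nothing to the variance in scores, and a simple regression recovers most of the score variance from ability alone.

```python
import random

random.seed(0)

# Synthetic model: score = ability + noise, with instruction time held
# constant for all students (so it adds no variance at all).
n = 5_000
ability = [random.gauss(100, 15) for _ in range(n)]   # IQ-like scale
noise = [random.gauss(0, 7) for _ in range(n)]        # everything else
score = [a + e for a, e in zip(ability, noise)]

def pearson_r(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r2 = pearson_r(ability, score) ** 2
print(f"variance explained by ability alone: {r2:.0%}")
```

With these assumed spreads the simulation lands above three quarters of the variance; the real-world figure depends, of course, on the actual ratio of ability differences to everything else.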
In my last blog, I explained why international comparisons of student achievement like the Programme for International Student Assessment (PISA) provide an inadequate basis for justifying education reform. At the end of that blog, I suggested that there are other data sources that challenge us to think about a range of changes to public education. I now offer three data-driven rationales for reform.
The three data sets justifying serious consideration of education reform are these: (1) cohort dropout rates, (2) changes in workforce requirements, and (3) dramatic recent changes in the scope and content of the human knowledge base. Let’s consider each of these in order.
The cohort dropout rate describes the percent of students in each high school class who fail to graduate on schedule at the end of the senior year, regardless of when a student leaves school. This statistic has drawn recent interest as a result of the current ESEA regulations that require states to report cohort dropout rates at the state and school district levels.
The results are of concern, though they have been long recognized by educators. In Oregon, the state cohort dropout rate is about 34 percent, with a range of district rates from 14 percent to 66 percent (for districts with at least 100 students in the cohort). On a national level, the rate is estimated at around 30 percent, though we should be cautious in believing that this statistic is accurate. The national data set is compiled from state data, and it is unlikely that reporting standards are identical in every state (though federal regulations should theoretically ensure consistency).
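Computed from cohort counts, the statistic itself is simple. A minimal sketch, using a hypothetical district rather than any actual reported counts:

```python
def cohort_dropout_rate(cohort_size, on_time_graduates):
    """Share of the entering class that did not graduate on schedule."""
    return 1 - on_time_graduates / cohort_size

# Hypothetical district: 250 students entered the cohort, 165 graduated on time.
rate = cohort_dropout_rate(250, 165)
print(f"{rate:.0%}")  # 34% -- roughly the Oregon statewide figure cited above
```

The hard part in practice is not the division but the bookkeeping: deciding who counts as a transfer versus a dropout is where state reporting standards diverge.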
Considered independently, the cohort dropout rate is distressingly high.
Over the last several years, critics of public education in the United States have regularly turned to data provided by the Europe-based Organisation for Economic Co-operation and Development (OECD) through its student assessment initiative, the Programme for International Student Assessment (PISA). (Two other international assessment programs similar to PISA have also been implemented. The Trends in International Mathematics and Science Study (TIMSS) is administered to a sample of 4th and 8th graders every four years, including 2011. The Progress in International Reading Literacy Study (PIRLS) is administered to a sample of 4th graders every five years, including 2011. The methodologies employed in all three assessments are similar, so comments I make regarding PISA generally apply to the other assessment programs as well.)
Every three years, PISA administers a common assessment to a sample of 15-year-old students in participating countries. In the most recent 2009 cycle, PISA assessments were administered in 65 countries/economies. Each assessment surveys student achievement in three domains: (1) reading literacy, (2) mathematical literacy, and (3) science literacy, with one of these being the primary focus. For the 2009 cycle, the focus was reading literacy with questions in this domain comprising about 60 percent of the assessment.
From these assessment data, individual country profiles describing student achievement are prepared along with various reports seeking to compare achievement across participating countries/economies. The comparison reports have been popular within the United States as a basis for criticizing public education and justifying the call for education reform. Based on average test scores for 2009, the United States ranked 17th in reading literacy, 30th in mathematics literacy, and 23rd in science literacy. These “low” rankings must signal a problem, right? As we shall see, these rankings may or may not be correct, and even if they are, more analysis is needed to understand their significance. Simple rank order displays rarely reveal much about the complexities of student achievement.
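Why might the rankings themselves be suspect? Each country's "score" is a sample mean, and sample means carry error. A toy sketch (invented numbers on a PISA-like scale, not actual PISA results) shows how often sampling error alone can reverse the order of two closely matched countries:

```python
import random

random.seed(1)

# Toy sketch: country A's true mean is 500, country B's is 497. Each observed
# mean comes from a sample of about 5,000 students with a score sd of 100,
# so the standard error of each country's mean is roughly 1.4 points.
se = 100 / 5000 ** 0.5
trials = 2_000
flips = 0
for _ in range(trials):
    observed_a = random.gauss(500, se)
    observed_b = random.gauss(497, se)
    if observed_b > observed_a:   # sampling error reverses the "ranking"
        flips += 1

print(f"rank reversed in {flips / trials:.0%} of simulated assessments")
```

When a dozen countries cluster within a few points of one another, a nation's exact place on the list is partly an accident of sampling.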
Children vary in cognitive ability. This is readily apparent in schools. We have long spent time assessing cognitive ability and developing programs to improve learning outcomes for those outside the general ability range (special education and TAG programs being notable examples). Yet the impact of cognitive differences on learning outcomes is rarely, if ever, taken into account by education reformers. This is troubling because over half of the variance in achievement among students of the same age is attributable to differences in cognitive ability.
Cognitive ability differences translate directly to academic achievement through variation in the ability of students to benefit from instruction. Lower ability students are more prone to misconceptions and are more likely to need more stage setting, more structured (scaffolded) skill development, and more skill practice to achieve mastery. In addition, they may need more examples to consolidate concept learning, more periodic and structured review to strengthen long term memory, more problems of escalating difficulty to reach desired levels of application, and generally need more frequent and precise assessment feedback. Instruction, if it is to be effective, must attend to these issues. But the consequence of these various learning challenges is that the rate of mastery of core concepts and skills is slowed. And without quality instruction, progress can stall altogether.
Higher ability students, on the other hand, generally need less staging—they already have the pre-requisites in hand, master skills and concepts on the first try, commit things to memory readily, and can handle sophisticated application problems without the need for intermediate levels of difficulty. They reach mastery with greater ease, more quickly.
As a consequence of these different orientations to learning, students diverge from each other over time in terms of achievement, even when they are exposed to the best quality instruction. Differences in achievement are inevitable, particularly when the learning resources available to students are roughly the same. And resources available through public education—especially time—are roughly the same for all students.
I have recently completed a research report that discusses the relationship between cognitive ability and achievement from an empirical perspective. I also discuss some of the implications for standards-based school reforms. You can access the report here.
In a response to one of my earlier blog posts, a reader wondered whether teacher compensation was out-of-line with the private sector. The reader’s query was a good one and likely shared by many others, judging from recent media reports.
In an effort to provide some informed perspective, I have prepared a short analysis of teacher compensation in Oregon which can be found here. Based on my experience, the picture I paint is pretty typical for teachers in our state, though there is substantial variation from school district to school district due to our long tradition of local control and independently negotiated employment agreements.
In preparing my analysis I had several goals: (1) defining the occupational status of teaching, (2) framing compensation in the context of the teacher workplace, (3) clearly describing the various elements of teacher compensation, (4) identifying the relevant private sector peer group, (5) clarifying the scope of compensation in both the public education and private sector worlds, and (6) drawing meaningful compensation distinctions and comparisons.
While the current economic downturn has increased attention on public sector compensation issues generally, teacher pay in particular continues to generate perennial debate. I hope that the information I have provided will facilitate this discussion.
Our overarching goal to raise student achievement cannot be fully met without attention to teacher compensation issues.
Read the short analysis.
There are many proposals for reforming education. And new proposals continue to appear regularly. Over the past several months I have tried to sift through dozens of proposals and integrate the most important of these, those most likely to produce results, into a coherent framework. In developing this framework I tried to improve our fundamental understanding of public education in the United States and to clarify the purposes of education reform.
I then organized a limited number of “high leverage” improvement ideas into three themes: teaching and learning, education infrastructure, and accountability. Next, I attempted to show how these parts fit together as a coherent whole. Finally, I considered the policy changes needed to implement education reform.
I have argued that policy makers at many levels should work together to establish common purpose, focus attention on what matters most, and sustain a strategic effort over time. I’ve also asserted that substantial progress can be made using the resources already available and that meaningful work can commence immediately.
If the topic of education reform interests you, you can find my monograph here.
You’ll note that the Chalkboard Project’s current emphasis on teacher quality issues is strongly supported by my own research. Over the long term, work in this area is essential to improving student achievement and creating meaningful accountability.
I consider the monograph a work in progress. Consequently, I welcome feedback based on all points of view. I intend to revise it periodically based on suggestions for improvement and the availability of new evidence.
I hope you find my proposal interesting and that you’ll join me in an ongoing discussion.