Posts Tagged ‘test criteria’
A new report suggests that Oregon could benefit from significantly changing its school and district accountability system. The report, commissioned by the Chalkboard Project, Stand for Children, OBA, and the Confederation of Oregon School Administrators and prepared by Education First Consulting, recommends that Oregon overhaul the indicators used and reported in its current accountability system to include a richer set of information that suggests how well schools are helping students prepare for college and careers.
The report suggests that states with successful accountability systems communicate results effectively, provide meaningful resources for interpreting and using those results, and base their systems on rigorous college- and career-ready expectations. The authors synthesized promising practices from state accountability systems and compared them to Oregon’s current system.
Additional recommendations for Oregon include revamping and streamlining the state’s reporting system (reconsidering the reporting timeline, the number of reports, and how useful the data are for informing instruction and decisions) and exploring incentives to motivate schools and districts to continually improve or to maintain success. The report also recommends that Oregon improve its measurement and use of student growth scores, and it suggests adopting the Colorado Growth Model.
Download the full report. For more about Education First Consulting, see www.educationfirstconsulting.com.
Since the LA Times published effectiveness rankings of 4th- and 5th-grade teachers in the Los Angeles Unified School District earlier this year, there has been much public debate over the use of value-added models (VAMs). A VAM is intended to be a statistical estimate of a teacher’s effect on student achievement, taking into account each student’s past performance and expected academic growth. While discussions of VAMs are not new to educators or policy wonks, a group from the Brown Center on Education Policy at the Brookings Institution recently released a report on some of the questions and concerns surrounding VAMs.
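To make the idea concrete, here is a minimal sketch of one simple way a value-added estimate can be formed: predict each student’s current score from a prior score, then average the prediction errors for each teacher’s students. This is only an illustration, not the task group’s or any district’s actual model, and all names and numbers below are hypothetical.

```python
import numpy as np

def value_added(prior, current, teacher_ids):
    """Crude per-teacher effect: the mean regression residual of that teacher's students."""
    prior = np.asarray(prior, dtype=float)
    current = np.asarray(current, dtype=float)
    # Fit current = a + b * prior by ordinary least squares.
    X = np.column_stack([np.ones_like(prior), prior])
    coef, *_ = np.linalg.lstsq(X, current, rcond=None)
    residuals = current - X @ coef
    # A teacher's "value added" is the average residual among his or her students.
    teacher_ids = np.array(teacher_ids)
    return {t: residuals[teacher_ids == t].mean() for t in sorted(set(teacher_ids))}

# Hypothetical scores: teacher A's students gained a bit more than predicted.
print(value_added(
    prior=[210, 220, 230, 215, 225, 235],
    current=[224, 232, 240, 218, 228, 238],
    teacher_ids=["A", "A", "A", "B", "B", "B"],
))
```

Real VAMs go further, using multiple prior years, student and classroom characteristics, and statistical adjustments for noisy estimates; the report’s concern is how such estimates should be used in evaluations, not the details of any one formula.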
The report, “Evaluating Teachers: The Important Role of Value-Added,” was produced by the Brookings Brown Center Task Group on Teacher Quality. The task group included Steven Glazerman, Mathematica Policy Research; Susanna Loeb, Stanford University; Dan Goldhaber, University of Washington; Douglas Staiger, Dartmouth College; Stephen Raudenbush, University of Chicago; and Grover J. “Russ” Whitehurst, The Brookings Institution.
Here are some highlights from the report:
- Whether value-added information should be a component of teacher evaluation is a different question from how teacher evaluations should affect human resource policies and decisions.
- Much of the concern with VAM is over the fear that an effective teacher could be misclassified as ineffective; yet, in many other professional fields, we readily accept that evaluations are not 100% fool-proof and that imprecise measures are often used to make “high stakes decisions that place societal or institutional interests above those of individuals.”
- “…the interests of students and the interests of teachers in classification errors are not always congruent…” While there is rightfully concern over effective teachers being misclassified as ineffective, we also need to weigh this against the consequences for students of labeling ineffective teachers as satisfactory.
- “…all decision-making systems have classification error. The goal is to minimize the most costly classification mistakes, not eliminate all of them.”
- Rather than holding teacher evaluations to an unrealistic standard of perfection, we should compare value-added models to other forms of teacher evaluation and classification.
Traditionally, grades have been interpreted this way: C means average, B above average, A excelling, D below average, and F failing. Yet no student of mine in fourteen years of teaching believes this. My students view B as average, A as above average, C as below average, and D or F as failing.
Furthermore, I’m unsure whether most students know what it means to excel. Most are accustomed to earning As for simply following instructions. It’s not uncommon for a student to ask me why an essay was scored a B when they listed all the requested information. I’ll reply: yes, you listed the information, but you didn’t explain it, support it, or demonstrate that you truly understand it. In other words, you met the minimum criteria, but you didn’t surpass them. Often I receive a blank stare in response.
It seems that our students are receiving ever-better grades without necessarily working harder or smarter to earn them. A 2005 study by the organization that administers the ACT, which analyzed the GPAs and ACT scores of 800,000 students per year over 13 years, concluded that grades had inflated by more than 12% over that period: a student who scored a 20 on the ACT in 2003 had, on average, a 12% higher GPA than a student who scored a 20 on the ACT in 1991.
If grade inflation exists, if we instructors are assigning students ever higher grades, then we may be doing them a disservice. They may be learning that top marks are not hard to come by, and that’s certainly not going to motivate them to become the next great innovators and problem solvers our world needs.
I’m not suggesting teachers simply need to grade students harder. In truth, I wish we didn’t have to “grade” students at all. I wish, instead, that we could simply provide students and their families meaningful qualitative information and data to monitor and promote learning and growth. But as long as we do have grades — as long as colleges and communities look to grades, regrettably, as the sole barometers of student achievement — then we owe it to students to hold them accountable to solid standards and evaluate their work accordingly, and resist pressure from students, parents and administrations to grant favorable grades. That means when a student and/or parent asks for extra credit assignments at the end of a semester for the sole purpose of boosting scores, we should reply no, and let scores reflect actual performance.
As the State Board talks about adopting the Common Core standards that a majority of other states have taken on, I find myself wondering how we will determine our progress in meeting those standards. It seems to me that if these states all have the same standards, they should all use the same ruler to measure by. That makes sense for apples-to-apples comparisons of how school districts are doing across the nation. Yet I have only just learned it is not that simple.
Summative vs. formative testing seems to be an issue, as is what some perceive as the cost of implementing new standards, curriculum for those standards, and assessments of whether students meet them. The latter argument does not work with me – we cannot use cost as a reason not to evolve, and, frankly, Oregon schools adopt new standards regularly and already pay for assessments of some sort anyway. It is the former issue that I need your help with.
It seems to me that a summative test at the end of a school year should show how well a school did, in general, at getting a grade level to meet the adopted standards. This summative test would be the same in every state. The formative tests, however, should be not only state-specific but grade-, subject-, and even teacher-specific. Teachers should choose the formative tests that they believe will best show them how to teach the class and where it needs improvement. If the teacher’s goal is to do well on the summative test, shouldn’t they be given the power to figure out how to get there?
Can’t wait to hear your thoughts on this…
The stack of state writing-test booklets stares at me from the corner of my desk, flaunting its power. I scowl back.
Picking up the top one to glance over the prompts and format, I freeze when my eyes catch the scoring chart: six boxes, each a numerical judgment of the student’s mastery of one of the six writing traits—ideas, voice, word choice, organization, sentence fluency, and conventions. But wait, what is this in fine print? Voice and word choice do not count?! Conventions are counted twice? No. There has to be some sort of mistake. What exactly are we expecting from our young writers? What message does this send them? No voice… No word choice… Says who?