Sunday, February 26, 2012

Reasonable doubt by Aaron Pallas

I’ve been relatively quiet in the ongoing debate about how best to evaluate teachers in New York City and across New York State. I’m not close to the negotiations and can claim no expertise on the political machinations outside of public view. At its heart, this seems to me a dispute over jurisdiction: Who has the legitimate authority to regulate the work of an occupation that seeks the status of a profession—but one that is in a labor-management relationship?
The laws of New York recognize the labor-management fault line, but they do little to guide a collective-bargaining process toward agreements in the many districts in which teacher-evaluation systems are contested. Each side brings a powerful public value to bear on the disagreement.
For the employers, it’s all about efficiency. It’s in the public interest, they argue, to recruit, retain and reward the best teachers, in order to maximize the collective achievement of students. A teacher-evaluation system that fails to identify those teachers who are effective, and those who are ineffective, can neither weed out consistent low-performers nor target those who might best benefit from intensive help. Rewarding high-performing teachers can, in the short run, help keep them in their classrooms, they claim, and, in the long run, can help expand the pool of talented individuals who enter the occupation.
For teachers, the key concern is fairness. Fairness is primarily a procedural issue: Teachers, and the unions that represent them, seek an evaluation process that is neither arbitrary nor capricious, relying on stable and valid criteria that they believe accurately characterize the quality of their work. In this view, an evaluation process is unfair to the extent that it can be manipulated by a building administrator or school district to yield a particular rating for a teacher’s performance. It is also unfair if random factors beyond a teacher’s control unduly influence the evaluation of his or her performance.
The values of efficiency and fairness collide head-on in New York’s Education Law §3012-c, passed as part of the state’s efforts to bolster its chances in the 2010 Race to the Top competition. The law requires annual professional performance reviews (APPRs) that sort teachers into four categories—“highly effective,” “effective,” “developing” and “ineffective”—based on multiple measures of effectiveness, including student growth on state and locally selected assessments and a teacher’s performance according to a teacher practice rubric.
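As a rough illustration of how a multi-measure composite of this kind might be assembled and mapped onto the four labels, consider the sketch below. The weights, subscore scales and cut points are hypothetical placeholders for the sake of the example, not the values actually prescribed by the statute or NYSED regulations.

```python
# A minimal sketch of how a multi-measure APPR-style composite might be
# combined and mapped onto the law's four rating categories.
# NOTE: the weights, subscore scales and cut points below are hypothetical
# placeholders, not the values prescribed by Education Law 3012-c or NYSED.

def composite_rating(state_growth, local_measures, observation,
                     weights=(0.20, 0.20, 0.60)):
    """Combine subscores (each on a 0-100 scale) into one composite score
    and assign one of the four statutory labels."""
    composite = (weights[0] * state_growth +
                 weights[1] * local_measures +
                 weights[2] * observation)
    if composite >= 90:
        label = "highly effective"
    elif composite >= 75:
        label = "effective"
    elif composite >= 65:
        label = "developing"
    else:
        label = "ineffective"
    return composite, label


if __name__ == "__main__":
    score, label = composite_rating(state_growth=55, local_measures=70,
                                    observation=85)
    print(f"composite = {score:.1f} -> {label}")  # composite = 76.0 -> effective
```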
The fundamental problem is that it’s hard to assess the efficiency or fairness of an evaluation system that doesn’t exist yet. There are too many unknowns to be able to judge, which is one of the arguments for piloting an evaluation system before bringing it to scale. The properties of the state tests that are to be used to assess teachers’ contributions to student learning are a moving target; the tests have been changing in recent years in response to concerns about their difficulty, predictability and coverage of state curricular standards. And in a couple of years, those standards and assessments will change, as New York and many other states phase in the Common Core standards and new assessments designed to measure mastery of them. The models to estimate a teacher’s position relative to other teachers in contributing to students’ test performance are imprecise at the level of the individual teacher, and different models yield different results for a given teacher. There’s been little to no discussion of how to incorporate this uncertainty into the single numerical score a teacher will receive.
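To see why this matters, consider a hypothetical growth-percentile estimate with a plausible standard error. The confidence interval around the point estimate can easily straddle several scoring bands, which is exactly the uncertainty that a single numerical score conceals. All the numbers in the sketch below are invented for illustration; they are not drawn from New York's actual growth model.

```python
# A sketch of the measurement-uncertainty problem: a growth-percentile
# estimate with a realistic standard error yields a confidence interval
# that can straddle several scoring bands. All numbers here are invented
# for illustration and are not drawn from New York's growth model.

def growth_interval(point_estimate, standard_error, z=1.96):
    """Approximate 95% confidence interval for a 0-100 growth percentile."""
    low = max(0.0, point_estimate - z * standard_error)
    high = min(100.0, point_estimate + z * standard_error)
    return low, high

def bands_spanned(interval, cuts=(20, 40, 60, 80)):
    """Count how many scoring bands the interval overlaps, given
    hypothetical cut points on the 0-100 percentile scale."""
    low, high = interval
    edges = [0, *cuts, 100]
    return sum(1 for a, b in zip(edges, edges[1:]) if low < b and high > a)

if __name__ == "__main__":
    ci = growth_interval(point_estimate=52.0, standard_error=12.0)
    print(f"95% CI: {ci[0]:.1f} to {ci[1]:.1f}, spanning {bands_spanned(ci)} bands")
```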
The evaluation of teachers’ practices via classroom observations using New York State Education Department (NYSED)-approved rubrics, such as Charlotte Danielson’s Framework for Teaching or Robert Pianta’s Classroom Assessment Scoring System, is another unknown. There’s evidence that with proper training, observers can reliably rate teachers’ classroom practices, but the nature of the training is critical, and there is no evidence to date of New York City’s ability to prepare more than 1,500 principals, or the principals’ “designees,” to carry out multiple observations of many teachers, teaching many different school subjects, each year.
Amazingly, there is even uncertainty about whether the evaluations can or should be based solely on a teacher’s performance in a single year. The statute creating the new evaluation system in New York describes it as an “annual professional performance review.” But is this a professional performance review that occurs annually, or a review of annual professional performance—that is, a teacher’s performance in the most recent year? The guidance provided by the NYSED suggests that it has no idea. “For 2011-12, only one year of teacher or principal student growth percentile scores will factor into each educator’s evaluation,” the guidance states. “When more years of data are available, NYSED will consider whether each evaluation year should include more than one year of educator student growth results. Empirical and policy considerations will determine the decision.”
Well, that certainly clarifies matters. In other words, a “bad” year where a teacher is ranked relatively low compared to other teachers might reverberate, affecting his or her ranking in subsequent years. But a good observational rating in a given year seemingly will have no spillover effect into subsequent years. If, as has been true in Washington, D.C.’s IMPACT teacher-evaluation system, teachers generally score higher on observational ratings than on their value-added or growth-score rankings relative to other teachers, the carryover for value-added performance—but not observations of teachers’ professional practices—appears unfair. And in D.C., this evaluation system has resulted in the termination of hundreds of teachers based on one or two years of performance.
Teacher-evaluation systems have multiple purposes, which might include certifying teachers as competent or selecting some for particular forms of professional development to enhance their professional practice. For most of these purposes, it’s essential that those with a stake in the education system view these evaluation systems as legitimate—and the perceived efficiency and fairness of an evaluation system are central to such judgments. It’s not hard to see why a great many teachers, in New York City and across the state, have serious doubts about the fairness of New York State’s APPR process. And if future teachers do as well, the process could have the unintended consequence of reducing, rather than increasing, the pool of individuals willing to consider teaching as a vocation. This, coupled with the more than 1,300 principals across the state who have raised questions about the efficiency of the process, illuminates the challenges confronting the state as it seeks to implement the APPR system and avoid a scolding from U.S. Secretary of Education Arne Duncan.
William Blackstone, an 18th-century English legal scholar, wrote “better that ten guilty persons escape than that one innocent suffer.” Benjamin Franklin, one of the founders of our country, later upped the ante to one hundred to one. The principle squarely captures the trade-off between the value of efficiency and the value of fairness. A legal system that lets the guilty go free is inefficient, as these offenders are free to continue to transgress against the common good. But to Franklin and others, that was still preferable to a legal system that did not provide adequate procedural protections for all, whether innocent or guilty, because such a system would be inconsistent with the principle of fairness so central to the American polity.
It’s important to note that Blackstone and Franklin were concerned with the workings of government; fairness in the private sector was not a central concern, and efficiency was taken for granted as a consequence of market forces. Civil servants, as agents and employees of the state, arguably are subject to a different set of rights and responsibilities than those working in the private sector, and teachers are one of the largest groups of such public servants. What’s an acceptable trade-off between efficiency and fairness in the mix of teachers’ rights and responsibilities? It’s a lot easier to speculate about percentages in the abstract than to confront the possibility that you, or someone close to you, might be out of a job because of an untested teacher-evaluation system that cuts corners on fairness.

COMMENTS

LAURENCE MECHANIC
You say it best when you discuss efficiency. Efficiency is the antithesis of effectiveness, and therefore the teacher-evaluation process is inherently flawed. As a teacher, my only hope is that the general public, whose hard-earned money is funding public education, truly begins to understand the problems associated with mixing a business mentality with education.
