Wednesday, May 16, 2012

A Loss for Cyber-Schooling or Just a Regrouping?

On Monday, May 14, Arizona Republican Governor Jan Brewer vetoed SB 1259, the bill written by ALEC and K12 Inc. lobbyists that would have greatly expanded the give-away of Arizona taxpayer dollars to outside profit-making corporations. In short, SB 1259 would have required every school district in the state of Arizona to provide a minimum of two online courses per year to any pupil in grades 7–12 who requested them. The company supplying the cyber-courses would be reimbursed at 100% of a pro-rated per-pupil expenditure. Full-time online students in Arizona currently number nearly 40,000, with most of the money flowing through online charter schools to K12 Inc. SB 1259 would have opened up a market for the corporations many times larger than the full-time charter market.

What made SB 1259 fly through the Arizona Legislature and onto the Governor’s desk was an accountability provision promising to guarantee mastery of course content. Each online course would have to be accompanied by a final exam matched in difficulty to the state-developed AIMS test. Presumably, after sufficient psychometric magic had been performed, the new course-level final exam would “pass” no one who would not “pass” some associated AIMS exam. But this accountability provision was nothing but window dressing. A standardized test is a test that is administered under standardized conditions. Those taking it are allowed the same amount of time to finish and are treated equally in terms of accessories (e.g., one student may not be denied the aid of a smartphone while another is allowed to consult the internet for help in answering a question). This obvious condition to impose on any test that deserves the name “accountability” could be met by sending all online students to a testing center to have their exams proctored. A private company could even set up testing centers all around the state and make itself a handsome profit (with money that might otherwise have been sent to K12 Inc. in Herndon, Virginia). Instead, modifications to SB 1259 before it reached the Governor’s desk stipulated only that the final exam in an online course be taken by the student in the presence of a “non-family member.” This feeble stab at instituting an accountability measure is simply laughable.

So why did Brewer veto SB 1259? The ostensible reason was that the Governor considered it inappropriate for the state "or an entity on behalf of the state [to approve] online courses or curriculum." This refers to a “master list” of courses that would have been created by the Arizona Department of Education for delivery to students through the online program. Courses currently offered by online providers (read “K12 Inc.”) would be “grandfathered” onto the master list.

What might be the real reasons that Brewer vetoed the bill?

The costs of developing all those final exams and linking them to the state’s AIMS test could have proved prohibitive for either the state or the outside companies. It is unlikely that the accountability provisions in the final drafting of the bill originated with its sponsors; they were probably added by Democrats in committee. K12 Inc. might have wanted the Governor to kill the bill so that ALEC and its lobbyists could take another run at it in a future session.

Another reason for scuttling SB 1259 could involve the new State Superintendent of Public Instruction. A technophile politician by the name of John Huppenthal recently replaced a non-educator political opportunist who ran successfully for State Attorney General. Huppenthal, a former systems analyst for one of the state’s major utility companies, may have ambitions of his own for developing a state-owned cyber-schooling capability. K12 Inc. has attempted to kill off state-operated cyber-coursework in other states (Arkansas, Tennessee) to protect and expand its market. Brewer’s veto could signal a battle between states and corporations for control of the cyber-schooling market nationwide. If so, it will be a battle where students and taxpayers lose, no matter what the outcome.

Gene V Glass
University of Colorado Boulder
Arizona State University

Monday, May 7, 2012

Houston, You Have a Problem!

Education Policy Analysis Archives recently published an article by Audrey Amrein-Beardsley and Clarin Collins that effectively exposes the Houston Independent School District’s use of a value-added teacher evaluation system as a disaster. The Educational Value-Added Assessment System (EVAAS) is alleged by its creators, the software giant SAS, to be the “most robust and reliable” system of teacher evaluation ever invented. Amrein-Beardsley and Collins demonstrate to the contrary that EVAAS is a psychometric bad joke and a nightmare for teachers.

EVAAS produces “value-added” measures for the same teachers that jump around willy-nilly from large and negative to large and positive from year to year, even when neither the general nature of the students nor the nature of the teaching differs across time. In defense of the EVAAS, one could note that this instability is common to all such systems for attributing students’ test scores to teachers’ actions, so that EVAAS might still lay claim to being “most robust and reliable”: they are all unreliable, and who knows what “robust” means?

Unlike many school districts which have the good sense to use these value-added systems for symbolic purposes only (“Look at us; we are getting tough about quality.”), Houston actually fired four teachers (three African-American, one Latina) based on their EVAAS scores. Houston fired Teacher A partly on the basis of EVAAS scores that looked like this:

EVAAS Scores for Teacher A by Year & Subject

  Subject      2006-2007    2007-2008
  Math            –2.0         +0.7
  Science         +2.4         –3.5

The above scores are just a representative sample of the wildly unreliable scores that Teacher A accumulated over four years in several subjects.

As if this pattern alone did not exonerate Teacher A, her supervisor’s ratings of her performance, based on classroom observations, were highly negatively correlated with her EVAAS scores. Amrein-Beardsley & Collins report that 1) teachers insisted that their teaching methods changed little from year to year while their EVAAS scores jumped around wildly, and that 2) principals reported having been pressured to adjust their supervisory ratings of teachers to bring them into agreement with the EVAAS scores. After all, did the administration want to admit that it had spent a half-million dollars on an elaborate mistake?

The whole Houston story as reported by Amrein-Beardsley & Collins is gruesome in the extreme, and I recommend that you read it in its entirety. For me, the story sparked recollections of disasters from thirty years ago that I chronicled in a chapter in a book edited by Jason Millman and Linda Darling-Hammond. (Glass, Gene V. (1990). Using student test scores to evaluate teachers. Pp. 229-240 in Jason Millman & Linda Darling-Hammond (Eds.), The new handbook of teacher evaluation: Assessing elementary and secondary school teachers. Newbury Park, CA: SAGE Publications.) You can read the entire chapter here.

In the mid-1980s, I was able to find six school districts in the entire country that claimed to have based teacher compensation on the test-score performance of their teachers. Each of the six showed the same pattern of behaviors that I summarized thus:

    Using student achievement data to evaluate teachers...
  1. ...is nearly always undertaken at the level of a school (either all or none of the teachers in a school are rewarded equally) rather than at the level of individual teachers since a) no authoritative tests exist in most areas of the secondary school curriculum, nor for most special roles played by elementary teachers; and b) teachers reject the notion that they should compete with their colleagues for raises, privileges and perquisites;
  2. ...is always combined with other criteria (such as absenteeism or extra work) which prove to be the real discriminators between who is rewarded and who is not;
  3. ...is too susceptible to intentional distortion and manipulation to engender any confidence in the data; moreover teachers and others believe that no type of test nor any manner of statistical analysis can equate the difficulty of the teacher's task in the wide variety of circumstances in which they work;
  4. ...elevates tests themselves to the level of curriculum goals, obscuring the distinction between learning and performing on tests;
  5. ...is often a symbolic administrative act undertaken to reassure the lay public that student learning is valued and assiduously sought after.

Most of what I saw in the mid-1980s is true today and is true of the present-day EVAAS system in Houston. Regrettably, point #5 no longer holds. Not content to use these systems as mere symbolic window dressing, Houston has actually fired teachers based on their students’ test scores. Is HISD the bellwether of a dawning scientific age? Is it the district with the courage of its convictions? Should the nation look to Houston for leadership in ensuring that teacher evaluation is hard-headed and results-based?

Well, coincidentally, Houston was one of the six school districts that I investigated for my chapter in the Millman & Darling-Hammond handbook. And here is what became of Houston’s early-day effort to reward teachers for their students’ test-score gains.

Rod Paige, who eventually became Secretary of Education for Bush 43, became superintendent of HISD in 1994. But Houston had had a system of teacher incentive pay based on student test scores before Paige ever arrived. Since Paige was also an officer of the HISD from 1989 to 1994 and had co-authored the district’s “Declaration of Beliefs and Visions,” his influence may have been responsible for the teacher incentive pay system that antedated his superintendency.

Teachers in Houston elementary schools were being given monetary bonuses (not base salary increases) when the entire school reproduced the previous year’s average test score plus an additional increment, say, last year’s score plus 2 months in grade-equivalents. The bonuses amounted to a few hundred dollars for each teacher. After a couple of years in which the bonuses effectively replaced cost-of-living increases in the salary schedule, the test-score gains were bumping up against the ceiling of the test. One or two schools missed their bonuses, and tensions were rising.
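The bonus rule amounted to simple threshold arithmetic. A minimal sketch, with a hypothetical function name and invented numbers (the account above gives only the general form of the rule):

```python
# Hypothetical illustration of the bonus rule: a school earns the bonus
# only if this year's schoolwide average grade-equivalent score meets
# last year's average plus a fixed increment (e.g., 2 months = 0.2
# grade-equivalents). Function name and figures are invented.
def earns_bonus(this_year_avg, last_year_avg, increment=0.2):
    return this_year_avg >= last_year_avg + increment

print(earns_bonus(4.9, 4.6))  # True: the school gained 3 months
print(earns_bonus(5.0, 4.9))  # False: the school gained only 1 month
```

A fixed increment like this is exactly what collides with a test’s ceiling: once average scores approach the maximum the test can register, next year’s required gain becomes arithmetically impossible.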

In the meantime, the flow of money did not go unnoticed by the building administrators. Now principals are “instructional leaders,” or so the story goes. How they find the time (or the expertise) to lead teaching while ensuring the safety of students and staff, enforcing discipline, directing traffic, and fielding complaints from angry parents is a mystery to me; but perhaps I don’t know what an instructional leader really is. So the principals banded together, approached the HISD administration, and asked for their reward for making the test-score gains. Only this time, the rewards were $10,000, $12,000, and sometimes $15,000 (and we’re talking 1985 dollars here, too). Within a year or two, a couple of building principals were discovered to have taken the test answer sheets into their offices and done some erasing and re-marking. The entire system blew up, and the status quo ante was reinstituted.

Are things different now? Has some genius or some software company come up with a new system that is truly “robust and reliable”? And has a system been found that teachers and administrators acknowledge is legitimate and fair so that they will not be tempted to take whatever steps might be necessary not to become victims? And when will we see the value-added system that can be applied to politicians and school board members or even to researchers who invent value-added measuring systems? Or, as such persons regularly argue, is the value of their work so much more complex than that of a teacher of young children that it could not possibly be captured by a clumsy quantitative index?

Gene V Glass
University of Colorado Boulder
Arizona State University