Monday, June 30, 2008

VA and mobile students

I've been trying to pull together how the different districts and states using VA measures for accountability are dealing with mobile kids. The issue is discussed somewhat differently in the three states currently using SAS's EVAAS for state-wide accountability. Bill Sanders has addressed the issue very clearly in his writings around the EVAAS model, which can handle the missing observations that mobile students generate. In all of the jurisdictions for which I can find documentation, there is a rule for who falls into the accountability system and is therefore included in a particular year's model. Students who do not meet the definition are excluded from the growth model.
  • Ohio Rules - (page 1 of ODE VA FAQ and FAQ at Battelle for Kids)
    "How does high attrition or mobility affect the value-added measure?

    Schools and districts are accountable for students enrolled at that school for a full academic year. Only students who are continuously enrolled from October count week through March testing will be included in the analysis. The Ohio Department of Education will match students test scores across years and schools using the SSID."

    The two FAQs seem to imply that mobile students are included, but only when they meet the above definition. It is possible that some non-zero percent of students fall out of the analysis every year.

  • Pennsylvania Rules - (there are more detailed proposed rules here - but they do not differ on this issue)

    It seems that in Pennsylvania, the district - not the school - is accountable for the performance of students who do not attend for a "full academic year". As in the case of Ohio, the EVAAS system explicitly takes into account missing data for individual children.

    (page 21 of the Pennsylvania Consolidated State Application Accountability Workbook)
    "Schools, LEAs and educational entities are accountable for mobile students in the same manner as they are for other students. The “full academic year” criteria are applied to all students. In Pennsylvania, it is not uncommon for students to move from one school to another within the same district during an academic year. In these instances, the school in which the student is enrolled at the time of the assessment bears responsibility for test administration; however, the district, rather than the school, will be accountable for the student’s performance."

  • Tennessee Rules - (several districts (pdf page 2) and other external sites refer to a 150-day enrollment requirement before the test)
My concern is about the implications this has for high-mobility districts - particularly large urban settings with high mobility rates. While I can see how techniques for dealing with missing data can be used to make good classroom and grade estimates, there might be a related incentive not to focus as consistently on the learning needs of mobile children, given the pressing needs of those fully included in the accountability system. A model that includes "dosage" or proportional assignment of student growth is what we should be shooting for. This would be relatively easy in a district in which most of the mobility is school-to-school within the district, and as state-wide data systems improve, it should be relatively easy to track mobile students within a state and get access to their test data and current school. A dose-based model does the math right and provides consistent incentives to school staff, as sketched below.
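To make the "dosage" idea concrete, here is a minimal sketch of proportional attribution. The students, schools, enrollment fractions, and gains are all invented, and this is not the EVAAS specification or any district's actual rule - just an illustration of the bookkeeping:

    from collections import defaultdict

    # Hypothetical records: (student, school, fraction_of_year_enrolled, fall_to_spring_gain)
    enrollments = [
        ("s1", "School A", 1.0, 12.0),
        ("s2", "School A", 0.6, 9.0),   # s2 moved mid-year
        ("s2", "School B", 0.4, 9.0),
        ("s3", "School B", 1.0, 5.0),
    ]

    # Dose-based attribution: each school gets credit for a student's gain
    # in proportion to the share of the year the student was enrolled there.
    weighted_gain = defaultdict(float)
    total_dose = defaultdict(float)
    for student, school, dose, gain in enrollments:
        weighted_gain[school] += dose * gain
        total_dose[school] += dose

    for school in sorted(weighted_gain):
        print(school, round(weighted_gain[school] / total_dose[school], 2))

Under this kind of rule, no mobile student simply disappears from the analysis; the incentive to attend to their learning is spread across the schools that actually served them.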

Chris

Friday, June 27, 2008

Accountability in Higher Education

There have been discussions of using standardized tests in undergraduate institutions as part of an institutional accountability system. A working committee was established by the National Association of State Universities and Land-Grant Colleges (NASULGC) and the American Association of State Colleges and Universities (AASCU). They developed a framework called the Voluntary System of Accountability (VSA). The VSA can be implemented with a series of different examinations (C-Base, CLA, CAAP, MAPP, GRE and ACT WorkKeys). There are currently not enough questions in common across these assessments to support a simple value-added model. There is the notion, however, that VA measures are the goal for the test-based elements in the accountability system.

The piece linked to by the title suggests that this approach follows the recommendations of the Spellings Commission too closely. Tests will not capture the range of what undergraduates are expected to learn across a wide range of subjects. The Association of American Colleges and Universities (AACU) has established the Valid Assessment of Learning in Undergraduate Education (VALUE) project to expand assessment efforts beyond tests to include an e-portfolio approach that would document both growth and breadth of student learning and development.

Chris

Tuesday, June 24, 2008

Obama and McCain on VA and teacher pay

There don't currently seem to be enormous differences between the two presumptive presidential nominees on student assessment, school reform, and the use of VA results. Both support public charter schools and programs to bring highly qualified teachers to low performing schools. Obama seems to be more open to a wider range of performance measures (including teacher knowledge, observed practices, etc.) than McCain when discussing teacher performance pay. Obama also puts more emphasis on early childhood development efforts.

Chris

Saturday, June 21, 2008

Adaptive tests as the assessment fix for NCLB's narrow approach to testing

In the Value-Added Research Center's work with districts and states, we have run into a number of instances of the NWEA MAP being used in untested grades and subjects to fill in gap years between NCLB-mandated tests. This approach is particularly appealing in districts engaged in teacher and/or school incentive projects. The ability to include more teachers in grade-to-grade growth models is appealing across the board. Administrators like the equity and external validation of external measures. Many educators like the respect given to tested subjects and prefer not to have their performance measured only by walk-throughs or other observational measures.

On the other hand, there is still a great deal of research being done on the validity and reliability of growth models based on computer adaptive tests (over 2300 hits). Students under an adaptive test regime do not take the same form of the test, and much of the science around measuring growth relies on students taking the same form. However, when one looks at the work being done on VA uses of adaptive tests, one gets 4 hits.
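The crux is the scale assumption. If the adaptive test really does place every student on a single developmental scale - as the MAP's RIT scale is intended to - then a gain is just a difference in scale scores, no matter which items a student saw. A hedged sketch of that arithmetic, with made-up scores:

    # Hypothetical fall and spring scores on a single vertical scale.
    # Whether these gains are comparable across students rests entirely on the
    # assumption that the adaptive test's scale is truly vertical and
    # interval-like across grades; that is the open research question.
    fall   = {"s1": 205, "s2": 218, "s3": 190}
    spring = {"s1": 214, "s2": 223, "s3": 202}

    gains = {s: spring[s] - fall[s] for s in fall}
    mean_gain = sum(gains.values()) / len(gains)
    print(gains, round(mean_gain, 1))

The arithmetic is trivial; the research question is whether the scale supports it.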

We are hoping to be able to work with one or more districts using the MAP to see how well this works in practice. We are also looking at districts using quarterly diagnostic assessments to predict performance on the annual high-stakes tests. Likewise, we are likely to work with one or more districts that want to use the PLAN-ACT series of tests for measuring high school productivity.

There is certainly plenty of work to do.

Chris

Thursday, June 19, 2008

Colorado Growth Model Introduced

Colorado has been working for some time on a state-wide growth model. In early March, the state issued a press release and made a number of documents available on their web site.
  • Technical Report on Colorado’s Academic Growth Model (pdf)
  • Presentation to district assessment directors (pdf)
  • Changes to the accreditation process presentation (ppt)
The Technical Report includes both the authorizing legislation and a technical paper by external consultant Damian Betebenner. Betebenner and his colleagues at the Center for Assessment in Dover, NH are generally on the simpler-is-better side of student performance modeling recommendations.

I am not an economist, although I do play one on TV. However, I know from personal experience that we've been repeatedly put in the position of evaluating simple systems and the unintended consequences that flow from such models. As hard as one imagines it might be to do growth modeling well, it's probably two orders of magnitude harder than one imagines. In particular, what looks good to an outsider looks very different to a teacher or principal whose job performance or bonus is going to be based on that analysis. Educators suddenly discover a preference for complex models when the simpler models turn out to be unfair to some large portion of the adults in a system.

Chris

Wednesday, June 18, 2008

Colorado District Explains new state growth model

A district assistant superintendent does a nice job presenting the new Colorado growth model and how it differs from previous accountability measures.

Chris

Monday, June 16, 2008

UK Education officials struggle to explain attainment versus growth

Educators and policymakers in the US are not the only folks who struggle with explaining what can appear to be contradictory outcome measures. A recent report identifying failing schools described a wide range of growth performance. In particular, this story points to 30 schools that were in the top 5% on attainment measures (number of GCSEs) but failing on growth.

This is something many people have trouble discussing. The notion of "controlling" for prior ability and demographic characteristics gets confused with expectations. What controls do is level the playing field by making a fair comparison. From a social policy point of view, positive or negative coefficients for gender, economic status, or race have nothing to do with expectations. They are the growth equivalents of attainment gaps. If economically disadvantaged children in the 5th grade show growth in test scale scores that is, on average, 4 points lower than that of non-poor students, that 4 points is the performance gap. It tells us how well we are doing at helping students overcome the educational impact of non-school economic resources. Policy should be focused on improving the rate of learning growth to erase the growth gap.
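As a toy illustration of where such a coefficient comes from - all of the scores are invented and this is not any state's actual VA specification - the growth gap is simply the coefficient on the poverty indicator in a regression of current score on prior score and that indicator:

    import numpy as np

    # Columns: intercept, prior-year scale score, economically disadvantaged (1/0)
    X = np.array([
        [1, 420, 1], [1, 455, 0], [1, 430, 1], [1, 470, 0],
        [1, 440, 1], [1, 460, 0], [1, 425, 1], [1, 450, 0],
    ], dtype=float)
    y = np.array([446, 487, 455, 501, 468, 492, 450, 482], dtype=float)  # current-year scores

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # beta[2] is the conditional gap: how much lower (or higher) growth is for
    # poor students with the same prior score. It is a finding to be closed,
    # not an "expectation" to be built into targets.
    print(np.round(beta, 2))

Nothing in that coefficient says what poor children are expected to learn; it reports what the system delivered to them relative to otherwise similar peers.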

The challenge presented by interpreting high attainment versus low growth is actually not that hard to overcome. We all know of schools that are good at recruiting families and students with high prior test scores. Many ads for new homes include references to the attainment levels of local schools. High prior school-level attainment (and school magnet programs) tends to attract families with high-attaining students. Recruiting students with high prior attainment is the simplest way to be a high-performing school under an accountability system that focuses on attainment. A growth model, instead, controls for prior attainment and teases out what learning was delivered in that year. A school can be very good at recruiting while being not very good at challenging good students. The two things are quite different.

Chris

Friday, June 13, 2008

Bill Sanders responds to public criticism of his VA model

Ed Week covered a dispute (May 6, 2008) between Bill Sanders (SAS Inc.) and Audrey Amrein-Beardsley (Arizona State University). Assistant Professor Amrein-Beardsley published (Ed Researcher) her own analysis of data collected in a study Sanders conducted on the effectiveness of board-certified teachers in North Carolina.

What this exchange confirms is that smart, well-meaning people can come to entirely different conclusions. In particular, the argument that a simple model is required for VA to be accepted is completely at odds with our experience that VA models have to be quite complex to be fair. There is no simple answer to the transparency-equity argument. It is a normative paradox that leaves scholars red-faced and exasperated on both sides of the argument.

Chris

Wednesday, June 11, 2008

U. S. Secretary of Education Margaret Spellings Approves Additional Growth Model Pilots for 2007-2008 School Year

The next round of approvals of state Growth Models.
Washington, D.C. — U.S. Secretary of Education Margaret Spellings today announced approval of two high-quality growth models, which follow the bright-line principles of No Child Left Behind. Michigan is immediately approved to use the growth model for the 2007-2008 school year. Missouri's growth model is approved on the condition that the state adopt a uniform minimum group size for all subgroups, including students with disabilities and limited English proficient students, in Adequate Yearly Progress determinations for the 2007-2008 school year. (7thSpace)

Chris


Tuesday, June 10, 2008

New York regional education board provides VA PD

The Capital Region Board of Cooperative Educational Services held a day long session on value added measures on May 28, 2008. Sponsors of the session included New York State School Boards Association (NYSSBA), New York State Council of School Superintendents (NYSCOSS), School Administrators Association of New York State (SAANYS), and Battelle for Kids in Ohio.

I ran across the blog linked to the posting title as I was looking for people around the net explaining VA to others. I am keenly interested in how folks exposed to VA discussions understand them and explain them to others. This is one of the better explanations I've found. It uses a particular form of reporting for its explanation structure - one found in Battelle materials. I'm on the lookout for other approaches, such as this year's VA plotted against this year's attainment, compared with this year's VA plotted against last year's attainment. The two graphs tell different stories. Neither is wrong, but they have different purposes (a rough sketch of the two views follows below). I am hoping that the sophistication of the analysis used in PD efforts broadens to include a wider range of program and school evaluation questions.
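A rough sketch of those two views, with hypothetical schools, made-up cut points, and invented VA values. Pairing VA with this year's attainment asks which currently high- and low-scoring schools are actually growing students; pairing VA with last year's attainment asks where growth was delivered given where students started:

    # Hypothetical school summaries: (school, last_year_attainment, this_year_attainment, this_year_va)
    schools = [
        ("A", 72, 75, +2.1),
        ("B", 88, 86, -1.4),
        ("C", 55, 60, +3.0),
    ]

    def quadrant(attainment, va, attain_cut=70, va_cut=0.0):
        # Crude two-by-two classification of the kind used in many VA reports.
        vert = "high attainment" if attainment >= attain_cut else "low attainment"
        horiz = "positive VA" if va >= va_cut else "negative VA"
        return f"{vert}, {horiz}"

    for name, last_att, this_att, va in schools:
        print(name, "| vs this year:", quadrant(this_att, va),
              "| vs last year:", quadrant(last_att, va))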

Chris

Sunday, June 08, 2008

Houston's VA PD and the criticism of complexity

Whatever I think about the difficulty of training teachers and administrators to understand and use value-added measures, I agree with my colleague Rob Meyer, who consistently argues that simple is better, unless it's wrong. I really like the quote from Bill Sanders at the end of the RedOrbit post linked above. Bill uses a great teaching aid as well:
"I'm not going to trade simplicity of calculations for the reliability of the information," he said. "Before groups of teachers, I often hold up a cell phone and I say, 'I don't have a clue what's inside this, but I have to have trust that when I punch the numbers, it's going to call the right number.' "
There are two challenges when working with educators on understanding and using VA measures. First, one has to expose how the simplicity of attainment masks its underlying inadequacy as a performance measure. Second, one has to show that the more complex analysis used in VA models is more fair and gives educators credit for improving student learning no matter where the student starts across the range of prior ability.

There is no way to dodge the complexity bullet if we want to be fair to students and educators.

Chris

Friday, June 06, 2008

Houston delivers new value-added numbers as part of performance pay system

Houston abandoned its local performance measurement system after data quality and communication problems last year. The Houston Chronicle described the new approach at the beginning of this school year. The new program, called ASPIRE, was designed to address the core problems of the local system - clear rules, transparency around data quality, and equitable access to financial incentives. ASPIRE is supported by value-added analysis provided by SAS EVAAS, along with professional development, online support services, and leadership consulting provided by Battelle for Kids. This reworking of the HISD approach to performance pay was supported by the Broad and Bill and Melinda Gates Foundations.

We'll need to watch over the next few days and weeks to see how these efforts play out.

Chris

Wednesday, June 04, 2008

School board member digs in on tough issues

Wow. A recently elected school board member is really thinking hard about using VA results. Our local school board has been scheduling monthly VA briefings for the assessment subcommittee as we work to develop a statewide VA model for Wisconsin.

We are working to integrate our work with the Milwaukee and Madison school districts to show how the results and analysis differ in large and mid-size districts. We are also working with a regional service agency to explore the best ways to report results to small districts. Small districts have unique challenges for statistical analysis of VA given their low student counts. We will be looking at grouping similar rural districts into "quasi" districts to extract more explanatory power from the models.
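A hedged sketch of the "quasi" district idea - the grouping key (poverty rate) and the minimum student count are invented for illustration, not the rule we will actually use: pool rural districts that look alike until each pooled unit has enough tested students to support a stable estimate.

    # Hypothetical small districts: (name, tested_students, percent_economically_disadvantaged)
    districts = [
        ("Alma", 35, 48), ("Birch", 28, 52), ("Cedar", 41, 45),
        ("Dell", 30, 20), ("Elk", 25, 18), ("Fern", 38, 22),
    ]

    MIN_N = 80  # invented threshold for a usable analysis group

    # Group districts with similar poverty rates, then merge within each group
    # until the pooled ("quasi") district clears the minimum student count.
    districts.sort(key=lambda d: d[2])
    quasi, current, count = [], [], 0
    for name, n, pct in districts:
        current.append(name)
        count += n
        if count >= MIN_N:
            quasi.append((tuple(current), count))
            current, count = [], 0
    if current:  # any leftover districts join the last group
        names, n = quasi.pop() if quasi else ((), 0)
        quasi.append((names + tuple(current), n + count))

    print(quasi)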

Chris

Monday, June 02, 2008

Education Week on Using Value Added Data

Ed Week reported on a meeting at the Urban Institute that was a summary and policy-implications round-up of a much more technical meeting held here in Madison in late April 2008.

One of the most important things delivered by the April meeting was a concerted attempt to translate between economists, statisticians, psychometricians, and sociologists. There is an emerging consensus on the minimum requirements for a value-added model used for high-stakes decisions. As the scholars engaged in real-world analysis converge on a set of recommendations, we should be able to form a more consistent, non-technical explanation of VA model features and assumptions. One of the biggest stumbling blocks facing the widespread adoption of VA models is the perception that ordinary people cannot understand them. We are getting close to the prerequisites for a coherent set of explanations.

Chris