Tuesday, January 31, 2006

Examples of big payoffs available to program improvements

Current teacher turnover levels cost Illinois districts $224 million a year. This is the sort of number that a really solid evaluation plan could turn into a point of leverage. CPS would be an ideal place to do some quasi-experiments on the efficacy of different induction and retention programs.

Chris

Sunday, January 29, 2006

The State's Role in School Improvement

The Education Commission of the States provides a policy brief, School Restructuring Via the No Child Left Behind Act: Potential State Roles, that does a good job of laying out a range of strategies for states, from fairly hands-off supporting policies to very activist models, including school takeover.

Chris

Friday, January 27, 2006

Data Quality Campaign and its 10 Criteria

What a list. Some of this is going to be very hard to accomplish.
  1. A unique statewide student identifier
    • Mobility makes this tough. State-to-state transfers increase the difficulty enormously.
  2. Student-level enrollment, demographic and program participation information
    • Program participation remains tough to follow. Much of the money flows down to schools. Without the data processing capacity to track the relationship between teachers, students, and courses - something most districts lack - this is impossible. Many programs (state and federal) require only counts by school and grade. There has never been an incentive to track individual student participation - with a few exceptions such as special and vocational education.
  3. The ability to match individual students’ test records from year to year to measure academic growth
    • Many states are now doing this. NCLB was a wake-up call. Still, many states are brand new at this and will take several years to build up useful longitudinal data. (A minimal sketch of this kind of year-to-year linkage follows this list.)
  4. Information on untested students
    • Ideally, districts and states go a step beyond this with induction testing for children who move into the district outside of the high-stakes testing window. It is vitally important to make sure that all students are tracked and that schools are held accountable for all kids.
  5. A teacher identifier system with the ability to match teachers to students
    • Without this link it is impossible to provide teachers with the characteristics of their incoming kids. This link is also necessary if one is to study the effectiveness of a particular curriculum given varying levels of teacher experience and/or training. The tie between teacher and student is at the core of the "production function" of education.
  6. Student-level transcript information, including information on courses completed and grades earned
    • This data documents mastery of required content and shows students' opportunity to learn at a system level.
  7. Student-level college readiness test scores
    • While this seems like a good idea, it is tough to administer tests with no stakes attached. Many districts have attempted to use exit exams, with mixed results.
  8. Student-level graduation and dropout data
    • This remains a serious measurement problem. Dropouts are the measurement of a non-event - a student no longer attends school. We don't know what happened - only that we don't see them any longer. This makes it tough to fix the problem. We don't know if it was a failure of the educational system or some other influence.
  9. The ability to match student records between the PreK–12 and higher education systems
    • Student outcomes after high school may be the best way to measure PK-12 productivity. Entrance and placement exams may be the most useful data available.
  10. A state data audit system assessing data quality, validity and reliability
    • It is possible to do technical audits for compliance, but to get this story right one would have to combine this with direct observation to see if reported data aligns with what one observes in the field.
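To make criteria 1, 3, and 5 a bit more concrete, here is a minimal sketch of the kind of linkage a unique student ID enables: matching a student's test records across years to compute a gain, keyed to the current-year teacher. All of the field names and records below are invented for illustration and do not reflect any state's actual schema.

```python
# Minimal sketch: link a student's test records across years by a unique
# statewide ID and compute a simple score gain (criteria 1, 3, and 5).
# Field names and data are hypothetical, for illustration only.

records = [
    {"student_id": "WI0001", "year": 2004, "grade": 4, "math_scale": 610, "teacher_id": "T-17"},
    {"student_id": "WI0001", "year": 2005, "grade": 5, "math_scale": 634, "teacher_id": "T-42"},
    {"student_id": "WI0002", "year": 2004, "grade": 4, "math_scale": 598, "teacher_id": "T-17"},
    # A student with no prior-year record cannot contribute a gain score,
    # which is one reason criterion 4 (tracking untested students) matters.
    {"student_id": "WI0003", "year": 2005, "grade": 5, "math_scale": 601, "teacher_id": "T-42"},
]

def gains(records):
    """Return per-student gains between consecutive years, keyed by the
    current-year teacher so gains can later be aggregated by classroom."""
    by_student = {}
    for r in records:
        by_student.setdefault(r["student_id"], []).append(r)
    out = []
    for sid, recs in by_student.items():
        recs.sort(key=lambda r: r["year"])
        for prev, curr in zip(recs, recs[1:]):
            out.append({
                "student_id": sid,
                "teacher_id": curr["teacher_id"],
                "gain": curr["math_scale"] - prev["math_scale"],
            })
    return out

for g in gains(records):
    print(g)
```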

Wednesday, January 25, 2006

Data warehouse maturity

The Tri-State Partners (MN, MI, & WI) are at very different stages of maturity in their data warehouse development. The three states are also at different stages of development in their student ID systems. One of the challenges we face is learning from the diverse capabilities and challenges of the three states.

Chris

Monday, January 23, 2006

Successful implementation of analytics

DMReview's 5 Principles of High-Impact Analytics applied to the development of systems for supporting longitudinal analysis of student data.

1. Recognize the Application Imperative
It is all too easy to try to attack every aspect of decision support at once. One of the positive outcomes of NCLB is that many states and districts have used the energy generated by the legislation to get people excited about using data to improve student learning. If teaching kids remains the critical focus of the analytics, success is more likely.

2. Democratize Information Assets
Many school districts have widely distributed decision-making authority. Even in places where that is not the case, aides, teachers, and building leaders operate as relatively independent agents. Decision support systems designed to support a few central office folks using annual data are not going to be received well. The lack of timely data is one of the common complaints heard from critics of district information system initiatives. Systems for schools and classrooms must be far more responsive if they are to find broad support.

3. Build Discipline in Decision-Making Processes
What the authors mean with this point is that there must be a link between the analysis provided by the system and well-understood practices that affect the related outcomes. It must be clear that the analytics "have a place in the organization". Feedback that is uninterpretable will be of little value.

4. Recognize New Skills Required for Knowledge Workers
Professional development for users of new decision support resources is vital. It will certainly cost far more than the system itself. This is another great reason to keep the scope of initial development narrow; it will keep training costs from ballooning out of control. New tools that no one has the resources to learn to use will undermine the effort.

5. Deal with Complexity: Closed-Loop, Adaptive Systems
The point of decision support systems is to support improvement. This should lead to a virtuous circle in which analysis generated by the system is used to increase the quality of the data coming in and the related decisions. The system itself should be exposed to the same scrutiny. Lessons learned in early implementation should be incorporated as the system is extended into new areas.

Chris

Saturday, January 21, 2006

Knowledge Management in Educational Reform

Organizational Knowledge Capabilities is a good term to keep in mind while working on the implementation of decision support systems. This map makes clear the payoffs from effective knowledge processes and knowledge infrastructure. One of the things I can help keep in front of the development teams is that no matter how good our infrastructure is, we have to have high-quality processes in place or it will do little to enhance our ability to improve learning for kids.

Chris

Friday, January 20, 2006

New push for data quality

The Data Quality Campaign, a collection of 10 education organizations funded by the Gates Foundation, specifically identifies the ability to connect PK-12 system graduates with higher education students in order to tie performance in higher education back to schools and districts.

This drive for quality has been a consistent concern in the data warehouse and decision support press. DM Review is a great source for accessible writing on business intelligence and data management. They have published a number of pieces on data quality. Of particular interest to us as we develop longitudinal systems for tracking student and teacher information are the implementation metrics of data quality and how to improve quality where it matters most. The author, Jane Griffin, suggests that one focus on "pain points" in data quality and determine why the most painful problems exist.
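As a simplified illustration of the "pain point" idea (not Griffin's own method), the sketch below counts two common problems in a hypothetical enrollment extract: duplicated student IDs and records with no teacher link. Measuring where the pain is concentrated is the first step toward deciding what to fix.

```python
from collections import Counter

# Hypothetical enrollment extract; field names and values are illustrative only.
rows = [
    {"student_id": "WI0001", "school": "0410", "teacher_id": "T-17"},
    {"student_id": "WI0001", "school": "0410", "teacher_id": "T-17"},  # duplicate record
    {"student_id": "WI0002", "school": "0410", "teacher_id": None},    # missing teacher link
    {"student_id": "WI0003", "school": "0412", "teacher_id": "T-42"},
]

# Count how often each student ID appears; more than once is a pain point.
id_counts = Counter(r["student_id"] for r in rows)
duplicates = {sid: n for sid, n in id_counts.items() if n > 1}

# Records that cannot be tied to a teacher break the student-teacher link.
missing_teacher = [r["student_id"] for r in rows if not r["teacher_id"]]

print(f"{len(duplicates)} duplicated student ID(s): {duplicates}")
print(f"{len(missing_teacher)} record(s) with no teacher link: {missing_teacher}")
```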

Chris

Wednesday, January 18, 2006

Views on Introduction of Growth Models in Education

The U.S. Department of Ed explains the history of the recent announcement (November 18, 2005) of a 10-state "experiment" on combining growth indicators with NCLB attainment goals. The NEA explains the move, stating that the DoEd is "[f]inally responding to repeated calls by NEA and others for a more reasonable approach to measuring school progress."

A number of groups lined up to give voice to their support and concerns. A common thread from supporters was the sentiment expressed by the NEA above: ignoring growth entails some fundamental ethical and methodological mistakes. On the other hand, supporters of high expectations for all kids (here the Citizen's Commission for Civil Rights) worry that, absent a consistently high bar, schools with high percentages of underprivileged kids will get let off too easy. The Education Trust voiced similar concerns.

Both Jenny D. (and her response to EduWonk's concerns) and EduWonk have done a good job laying out the issues associated with implementing growth models and NCLB with numerous links to primary material.

For that matter, it's not just the US that is struggling with this issue. The UK has recently introduced value-added reporting in order to provide more context for interpreting average school attainment. The Telegraph reports that this is an about-face for the government. Education Ministry officials had previously stated that schools educating significant proportions of disadvantaged students should be held to the same high standards - not provided with an excuse for lack of progress. The Campaign for Real Education expresses many of the same concerns as the critics in the US (cited above). Jim Taylor of Lancaster University weighs in in favor of value-added reporting, but with some caveats.

Chris

Tuesday, January 17, 2006

Value-Added and Schools of Education

Institutions of Higher Education are not escaping unscathed from the value-added debate. PK-16 and PK-20 projects are cropping up in various states - examples can be found in Oregon, Massachusetts, Texas, and Wisconsin. Education Trust published a report in Spring 2004 that (among many other things) specifically addressed teacher education programs by arguing that value-added analysis be used to improve teacher preparation programs. Louisiana is specifically targeting teacher education as a prime candidate for value-added analysis. The Value-Added Teacher Preparation Program Assessment Model provides a detailed description of the program rollout.

Chris

Sunday, January 15, 2006

Problems with Teacher Evaluation with Value-Added

There are technical and ethical difficulties with the small sample sizes that would be available to do teacher value-added analysis. Other groups, such as teacher unions, the National Association of State Boards of Education, and the National School Boards Association have entered the debate and are asking interesting questions about the circumstances under which value-added analysis is appropriate.
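To give a rough sense of the small-sample problem, assume (purely for illustration) that student gain scores have a standard deviation of about 15 scale-score points. The standard error of a classroom's average gain shrinks only with the square root of the number of students, so a single class of 20-25 kids leaves a lot of noise around any one teacher's estimate.

```python
import math

sd_gain = 15.0  # assumed standard deviation of student gain scores (illustration only)

# The standard error of a class's mean gain falls only with the square root of n.
for n in (10, 25, 50, 100, 500):
    se = sd_gain / math.sqrt(n)
    print(f"n = {n:3d} students -> standard error of mean gain = {se:4.1f} points")
```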

Wisconsin statute expressly forbids using single high-stakes assessment results to evaluate teacher performance:

Pupil Assessment Statute 118.30(2)(c)
The results of examinations administered under this section to pupils enrolled in public schools, including charter schools, may not be used to evaluate teacher performance, to discharge, suspend or formally discipline a teacher or as the reason for the nonrenewal of a teacher's contract.

On the other hand, there are folks arguing for using value-added analysis of teacher performance. Steve Miller criticizes elected officials in Nevada for stepping away from teacher evaluation. In Tennessee, just the opposite effort was underway last year: legislators unhappy with problems in the Tennessee value-added system introduced legislation to roll it back.

Chris

Friday, January 13, 2006

Details on state value-added systems

Pennsylvania provides details on its value-added assessment system (PVAAS). Fall 2006 is the first time initial adopters will receive value-added reports, but the details of the design and the FAQs for end users of the data are now available.

The Cleveland Plain Dealer supports the Ohio state-wide value-added system that will begin to deliver results in 2007.

School Administrator magazine includes both states in its assessment of the political and social barriers to rolling out value-added (or growth) models. The Council of Chief State School Officers (CCSSO) recently released an excellent comparison of growth and value-added models that includes both technical concerns as well as the relative difficulty of funding and explaining the various approaches.

Chris

Wednesday, January 11, 2006

Understanding value-added

Making sense of value-added assessment is no simple thing. The FAQ approach at the Operation Public Education site at the University of Pennsylvania is one helpful place to start.

Milwaukee Public Schools has recently published its 4th year of value-added analysis (school year 2004-2005 - pdf). The statistical model used for the value-added analysis generates a "beat the average" score for each school. The beat the average indicator compares a school's gain to the district-wide average gain in reading, language arts, and mathematics, from 2003-04 to 2004-05. The district-wide average is set to "0" on the school graph. If a school has a negative score, it does not mean it did not experience achievement growth. Rather, it means that the achievement growth at the school was less than the district-wide average growth in student achievement. The Milwaukee Journal Sentinel recently published a summary of the 4th year value-added results and a good explanation of how to interpret them.
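A toy version of the "beat the average" arithmetic (all numbers invented, not MPS results): if the district-wide average gain is 12 points, a school whose students gained 9 points on average gets a score of -3. That school still showed real growth, just less growth than the district as a whole.

```python
# Toy illustration of the "beat the average" indicator; all numbers are invented.
district_avg_gain = 12.0  # average gain across the district, 2003-04 to 2004-05

school_avg_gains = {"School A": 16.5, "School B": 12.0, "School C": 9.0}

for school, gain in school_avg_gains.items():
    # The district-wide average is set to 0; schools are scored relative to it.
    beat_the_average = gain - district_avg_gain
    print(f"{school}: gain {gain:+.1f}, beat-the-average score {beat_the_average:+.1f}")
```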

Part of our agenda in VARC must address the substantial training obligation this new evaluation framework will create. Moving from simple reports of proportion proficient or average test score will require substantial explanations and use cases to make appropriate interpretation accessible.

Chris

Tuesday, January 10, 2006

Explaining differences in School Productivity

A report from EdSource Online called "Similar Students, Different Results: Why Do Some Schools Do Better?" reports the results of a large study in California that attempts to unpack how differences in school staff (their attitudes, experience, practices, etc.) explain different outcomes with similar students. The study found that parent involvement contributes to student outcomes but that in-school factors such as teacher experience, standards-based instruction, and an early focus on student improvement by school leaders were more substantial.

Learning Point Associates takes an interesting look at this issue at the classroom and building level as it pertains to data use. They reported on the Bay Area School Reform Collaborative in August 2004. The following is a summary of the results:

"Summary of Recommendations
1. Schools need frequent, reliable data. Whether in the form of diagnostic assessments or qualitative data, teachers and school leaders need frequent feedback to identify strengths and weaknesses.
2. Teachers need support to use data. Teachers need professional development regarding how to understand data and how to take action on the data. They also need collaboration time to discuss strategies and visit each others’ classrooms to observe practice.
3. Race matters. Schools need to hire and promote people of color and provide structured, data-based opportunities for faculty to discuss how race and ethnicity affects students’ experiences in school. They should get specific regarding what equity should look like and then set measurable goals regarding how to reach that vision of equity.
4. Focus is essential. Schools should not try to do everything. Instead, they should choose what matters most and can be controlled within school walls and focus on it. One essential focus is to make sure that students are mastering reading/literacy skills; these skills are the foundation of learning."

What concerns me here in my efforts to support district- and state-level data system and decision support work is where to go with these recommendations. The things that seem to be in my area are improving accountability and assessment regimes and tools as well as pushing to make more learning opportunities around data-informed decision making available for state, district, and school staff.

Chris

Saturday, January 07, 2006

Value-Added and NCLB

Secretary of Education Spellings recently announced a new initiative to allow up to 10 states to include growth models of student learning that reward low-performing schools for achieving high rates of growth as they move towards compliance with attainment requirements. Andy Porter, former Director of the Wisconsin Center for Education Research, describes value-added analysis in a Spring 2005 newsletter. The American Federation of Teachers took up the promises and challenges of value-added analysis when addressing the deficiencies of NCLB.

Chris

Thursday, January 05, 2006

Supporting expansion of discussion on value-added

One of the problems with adopting value-added analysis models is that there is still relatively little published work on the topic. Operation Public Education at the Center for Greater Philadelphia at the University of Pennsylvania is one source that pulls both technical and press accounts together with a description of value-added models. Vendors are also improving the level of discussion. Harcourt has its own technical report on the requirements and benefits of value-added analysis.

The VARC team intends to add to this discussion by providing both more general models for value-added analysis and models that address specific characteristics of various accountability systems (multiple vendors, content differences on tests, mid-year testing, etc.). The first of these publications can be found in the appendix of our Tri-State Longitudinal Data System grant proposal.

Chris

Tuesday, January 03, 2006

Open Source Value-Added Model

It is our intention to provide a set of open source value-added models that will both advance scholarly discussions (such as the technical evaluation RAND did of the Sanders model) and support more ambitious evaluation efforts to figure out what programs, policies, and curricula work best and under what circumstances.

We also hope to engage other researchers and policy makers around the use of language in value-added research. We explicitly use the term classroom impact to include both teacher effect and other classroom factors. Given the fractured nature of educational policymaking (from federal mandates to locally elected school boards) and the complicated mix of interests tied up in that network of constituencies, it will be vital for VARC projects to focus on shared interest in making better choices for program selection and resource allocation.
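As a placeholder for the fuller models we plan to publish, here is a bare-bones sketch of the underlying idea: regress current-year scores on prior-year scores plus classroom indicators, and read the classroom coefficients as estimates of classroom impact (teacher effect plus other classroom factors). The data, field names, and two-classroom setup below are invented for illustration; the real models involve much more (student covariates, measurement error, shrinkage toward the mean).

```python
import numpy as np

# Invented data: prior-year score, current-year score, classroom label.
prior     = np.array([600, 615, 590, 640, 605, 620, 598, 633], dtype=float)
current   = np.array([622, 640, 610, 668, 618, 635, 607, 650], dtype=float)
classroom = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Design matrix: intercept, prior score, indicator for classroom B
# (classroom A is the reference category).
X = np.column_stack([
    np.ones(len(prior)),
    prior,
    (classroom == "B").astype(float),
])

# Ordinary least squares fit of current score on the design matrix.
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
intercept, prior_coef, classroom_b_impact = beta

print(f"coefficient on prior score: {prior_coef:.2f}")
print(f"estimated classroom B impact relative to A: {classroom_b_impact:.1f} points")
```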

Chris

Sunday, January 01, 2006

Value-Added (aka Growth) Models receive official recognition

The Value-Added Research Center (VARC) intends to be at the forefront of scholarly work on the appropriate use of test results as well as a leader in advanced techniques for applying value-added analysis to robust evaluation models. We are engaged in a number of different research and application projects that provide insights into various applications of robust evaluation designs. The VARC management team is made up of Center Director Robert Meyer, Gary Cook, Sarah Mason, Anthony Milanowski, and Christopher Thorn. WCER Director Adam Gamoran and School of Education Dean Julie Underwood are also engaged in the leadership team.

The core grant for VARC is a project funded through the Longitudinal Data Systems grants awarded by the U.S. Department of Education's Institute of Education Sciences. VARC principal investigators won 3 of the 14 grants awarded in a Tri-State Partnership with Minnesota, Michigan, and Wisconsin. VARC will be providing overall project coordination, data warehouse design assistance, and support for sophisticated embedded analysis.

Other VARC activities include the support of an embedded researcher in the Milwaukee Public Schools Assessment and Accountability Office, a 12-year study of class size reduction (known as the SAGE program in Wisconsin) that grows out of ongoing qualitative and quantitative work on the program, and providing value-added analysis assistance to NSF-funded Math and Science Partnerships.

VARC is being launched this winter as both an umbrella organization for related work and as a vehicle for exploiting complementarities between research and application projects.

Chris