Sunday, December 10, 2006

Data Quality Campaign a good idea, but....

The Data Quality Campaign has done a good job highlighting the work of groups who have been working on the issues of data use in school improvement for years. The DQC has been helpful in that it has focused these (and other affiliated and similarly-minded) groups on a series of core metrics for measuring state-level progress in system building. The dilemma here is that except for those states that provide school information systems to all schools from a central source, most of the data is annual accountability information and is almost irrelevant for local decision making at the school or classroom level. The data needed by states for accountability and monitoring is good for studying the impact of programs and policies, but it does not meet the real-time, frequent needs of building-level staff for planning lessons and addressing gaps in students' knowledge.

The way that these efforts may benefit students and teachers is through the universal requirement of student and teacher ID numbers and the application of modern information management principles and technologies to the education sector. The demand for more granular, high-quality data by state education agencies from local districts has provided much better direction to district research and IT staff as well as to vendors of systems at these levels. The real changes in data-informed decision making will take place as districts and schools have access to better training and tools for data use. The DQC can take some credit for raising the visibility of these issues.

Chris

Wednesday, December 06, 2006

Value-Added Research and educational infrastructure

One of the things our research team struggles with on a regular basis is the huge gap between what educational information systems track and what we need to correctly attribute programs to teachers and students. It has been rare - in my experience - to find a district student information system that comes close to managing the complexity of teaching and learning in modern schools. If we want to know the effectiveness of a particular 4th grade reading program, for example, one would want strong attribution of the following links:

  • Which adults are engaged in the instruction? Given multi-grade classrooms, team teaching, trade-offs with subject-matter experts who are not the primary instructor, etc., it is often difficult to determine who is doing the teaching (and there may be several adults).
  • What is being taught? While we may know what books were purchased, it is difficult to know if a teacher is actually delivering the purchased curriculum as intended. There may be custom additions or replacement of sections. The curriculum may be difficult, and the teacher may be struggling with the material as well.
  • Which kids are in the room? Pull-out programs, ability grouping, and student mobility may all cloud the picture and make it difficult to determine who was in the room to do the learning.
  • What resources does the teacher bring to bear? There are important aspects of teacher training (original university work and ongoing professional development) that provide important insights about what works from the input side of the production of student knowledge.
This is not to say that no one tracks these things. However, I would suggest that it is far more difficult than even most district administrators recognize. It is beyond the capacity of some large fraction of the school management software being sold today. A minimal sketch of what attribution requires appears below.
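To make the scale of the problem concrete, here is a minimal sketch (in Python, with entirely hypothetical names - this is not any vendor's schema) of the link tables a student information system would need just to support the attribution described above. Note that every link carries a date range and a role, which are exactly the pieces most systems drop.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TeachingAssignment:
        # Links an adult to a section with a role and date range, so team
        # teaching and visiting subject-matter experts can be represented.
        teacher_id: str
        section_id: str
        role: str          # e.g., "primary", "co-teacher", "specialist"
        start: date
        end: date

    @dataclass
    class Enrollment:
        # Links a student to a section with a date range, so pull-outs
        # and mid-year mobility can be represented.
        student_id: str
        section_id: str
        start: date
        end: date

    @dataclass
    class CurriculumDelivery:
        # Links a section to the curriculum actually delivered - not just
        # the materials purchased - with room to record modifications.
        section_id: str
        curriculum_id: str
        modifications: str  # custom additions, replaced units, etc.

Even this sketch omits teacher training and professional development histories; the point is that attribution requires many-to-many links with time stamps, not the single teacher-per-class field most systems provide.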

Chris

Friday, December 01, 2006

Fixing NCLB Suggestions

The recent election results have revived calls for fixes to NCLB. A recent Star-Tribune opinion piece sums up most of the changes recommended by many constituencies. The National School Boards Association actually keeps a running list of relevant activities and publications that focus on reforming NCLB.

At the same time, both of the major teacher organizations have reviewed and updated their positions on NCLB and the reforms they see as necessary - the American Federation of Teachers and the National Education Association.

Chris

Wednesday, November 29, 2006

Ties between high stakes assessment and dropouts

One of the concerns about high stakes for high schools - in particular - is that there is increasing pressure for schools to focus resources on the students close to proficiency to "shove them over the bar". The message that this sends to other students who are farther from proficiency is that they are on the periphery and do not matter. This is very likely to raise their likelihood of dropping out. At worst, these students are encouraged to move to other schools or to drop out.

Chris

Wednesday, November 15, 2006

Challenges in addressing attainment and growth goals

This debate summarizes much of the concern about growth or value-added models and the risk that they will dilute the push for high expectations for under-performing kids and schools. There are at least two ways that these concerns can be addressed. One thing that value-added measures can be used for is to identify schools that are "beating the odds" and delivering higher-than-average growth in student learning (a toy version of this calculation appears after the next paragraph). This provides both an "existence proof" of what is possible and a target for evaluation to figure out the mechanisms of success.

The other important contribution of value-added analysis to a high-stakes, high-attainment system is to provide insights into what kind of growth exists in current systems. There is some concern that there may be very few schools in the nation capable of delivering the rates of growth needed to achieve NCLB standards by 2014. This is vital policy information. If nothing we are currently doing - in terms of teacher education, professional development, curriculum, etc. - can deliver the growth needed, then more radical interventions in these areas are necessary.
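As promised above, here is the simplest possible version of a "beating the odds" calculation: an ordinary least squares fit of school mean post-test on mean pre-test, flagging schools with large positive residuals. Real value-added models condition on far more than a single prior score, and every number here is fabricated.

    # (school, mean pretest, mean posttest) - fabricated numbers
    data = [("A", 50, 55), ("B", 60, 63), ("C", 40, 52), ("D", 70, 75)]

    xs = [x for _, x, _ in data]
    ys = [y for _, _, y in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx

    for school, x, y in data:
        residual = y - (intercept + slope * x)  # growth beyond expectation
        flag = " <- beating the odds" if residual > 2 else ""
        print(school, round(residual, 1), flag)

School C (low attainment, high growth) is exactly the kind of school an attainment-only system would label a failure and a growth analysis would flag for study.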

Chris

Thursday, November 09, 2006

Producing high quality teachers

In an opinion piece, William Graves - Dean of Old Dominion University's Darden College of Education - supports changes in teacher education programs. Old Dominion offers a warranty for its teachers. Graves points out that Old Dominion already implements many of the suggestions recently put forward in a report by former Columbia dean Arthur Levine. One of the most interesting things about the Levine report is the vast differences in quality it found between education programs. One of the things that I like about this discussion is the focus on what constitutes a quality program. The recommendations address the "how would you know?" question.

Chris

Tuesday, November 07, 2006

Lovaglia's Law - Bad news for Evidence-Based Decisionmaking

Bob Sutton quotes Michael Lovaglia as follows:
Lovaglia’s Law: The more important the outcome of a decision, the more people will resist using evidence to make it.
As someone who works to help bring better data to bear on low- and high-stakes decision making in educational systems, this is troubling (but not that surprising) as a working hypothesis.

Chris

Monday, November 06, 2006

TIF Awards and growth at the district level

The first Teacher Incentive Fund grantee has recently been announced. This will start a round of federally-supported experiments in generating and using school-, grade-, and classroom-level growth or value-added performance models. There is no telling how well thought out these models will be, but their very existence will drive forward the discussion of equitable treatment of teachers, leaders, and students. It will require districts to focus on both growth and attainment. While some of the discussion will inevitably be contentious, the ability to tie growth outcomes to educational practices is a welcome shift.

Chris

Saturday, November 04, 2006

DOE preparing to announce additional states allowed to test growth models

While there is a short list of possible candidates who will be considered for growth model permission for next year, that is only a foretaste of what might be on the horizon. Eric Hanushek - the Hoover Institution scholar who chaired the first review panel - notes that with the creation of student ID systems and grade-level testing in almost all states, the number of possible candidates will quickly climb to include all states. So, despite the fact that the DoEd is taking the slow road to growth models (laying aside the issue that none of these models actually study growth), it will be possible for states and districts to experiment outside of official program participation. Many groups, both national and at the state level, are asking questions about school productivity and attainment. As it becomes possible for states to explore growth models, we may start to see states driving policy experiments and the Feds playing catch-up.

Chris

Monday, October 23, 2006

Help with answers to the "Now what?" question.

The Center on Innovation and Improvement has a nice collection of research and reports focused on school and district improvement. This site points to recent research on effective practices as well as links to federal programs and support resources around the country.

Chris

Tuesday, October 10, 2006

New agenda for Texas education

Whether one agrees with all of the efforts taken by Texas, it is not hard to see that its education leaders are out ahead in most policy areas. Standards-driven reform linked to testing (which improved in quality from the early years of poor tests) has sent strong signals to all levels of the educational system. Jim Windham of the Texas Institute for Education Reform draws from a recent policy presentation to lay out his group's agenda:
Enhance educator effectiveness: No education delivery system can be better than the educators in the school building. We need much better and more competitive preparation, certification reform, research-based professional development, effective mentoring, performance-based compensation, value-added evaluation, mandatory remediation and dismissal of ineffective educators.
Raise standards: After 10 years, it is clear that TEKS needs a complete overhaul. The expectations for our kids are too low, there is no grade-level specificity, no progression of rigor from grade to grade and in many instances, the standards are not measurable.
Strengthen accountability: We should phase into a 90 percent proficiency standard for accreditation of a campus, strengthen the consequences for school failure, adopt statewide public school choice, and expand charter school authority with equalized funding and tougher standards.
Refine academic performance assessments: We should adopt value-added evaluation for charters, educator preparation programs and educator compensation; add end of course exams in high school; and connect all assessments to college and workplace readiness expectations.
Finally, we should create a comprehensive agenda for systemic long-term reform for public education that will fulfill the objective that every child in Texas will graduate from high school fully prepared for higher education, the 21st century workplace and responsible citizenship.
This potpourri of efforts attempts to address almost all elements of the production chain for primary and secondary education. The one thing that is not mentioned explicitly is increased accountability of schools of education to train better prepared educators and leaders.

Chris

Sunday, October 08, 2006

Crappy People versus Crappy Systems

In another post by Sutton, there is an important lesson about organizational improvement. As much as school turn-around efforts rely on great leaders, there is a fair body of evidence that even star performers cannot overcome broken systems. One of the concerns I see for school improvement along NCLB lines is that the blinders that focus on reading and math scores may induce district staff to focus on getting leaders into buildings who can turn things around. It may be more important to improve the alignment of curriculum and professional development resources than to fix leadership. If it is the crappy system that is getting in the way of success, it is vital that we step back to take a look at the incentives and sanctions inherent in the current system.

Chris

Monday, October 02, 2006

Bad assumptions and submarines

Bob Sutton does a great job laying out the problem with pay for performance policies that are sometimes linked to value-added measures at the classroom level. Pay might provide an incentive to a teacher to work harder or smarter, but there are a large number of things that have an impact on classroom level growth that are outside the teacher's control. If we are going to provide incentives at that level, it is crucial that we also ensure that resources are equally allocated and that we don't implement bad tests that force our teachers to teach to low/poor standards.

Chris

Saturday, September 30, 2006

Testing to go nation-wide?

The debate on national standards is alive and well.

Chris

Thursday, September 28, 2006

Research Report at Pearson (PDF) discusses growth models

This paper - buried on the Pearson research pages - provides some useful definitions of terms and lays out some of the difficulties one encounters when developing growth models.

Chris

Monday, September 25, 2006

A look at California's API and its use as an accountability measure

California's API (Academic Performance Index) is criticized as inadequate for evaluating school progress or for judging the ability of schools to meet the needs of different sub-groups. The use of a complex indicator also makes interpretation difficult.

Chris

Saturday, September 23, 2006

Return on Investment and Reform

The Doyle Report takes an interesting look at our amazing inability to actually consider data when we think about school reform. There is probably ample evidence to support a hypothesis that year-round schooling would be far better for achieving desired educational outcomes. In general, it would be interesting to be able to say with some level of confidence that education reform effort A actually adds more value to student learning than effort B.

Chris

Monday, September 11, 2006

On the road to the UK and France

Posting is likely to be a bit spotty. I am off to give a couple of talks at the Strategies in Qualitative Research conference at Durham University, UK on the maturity of qualitative research tools. I'm going to repeat that talk a few days later in Lyon as well as give a presentation to several teams within ICAR - CLAPI and others - on the scholarly history and current capabilities of an open-source video transcription and analysis tool we developed here at WCER - Transana. I'll be back in the states in two weeks.

Chris

Sunday, September 10, 2006

An interesting back to school series from the Pittsburgh Gazette

The piece on the fifth focuses on the impact of NCLB, from narrowing of curriculum to holding schools accountable for achievement gaps. The follow-on pieces look at other major trends in urban education.

Chris

Friday, September 08, 2006

Another story about how AYP woes play out in South Bend, Indiana

Here's another good piece looking at how AYP failures and successes play out in an Indiana town. There are many schools that are failing to meet expectations. There is going to be increasing pressure for change. The question remains: is the change going to happen in the educational system, or will members of the U.S. Congress find that they cannot stand the heat and back off on accountability?

Chris

Tuesday, September 05, 2006

11 Districts begin using value-added results in Pennsylvania

Pennsylvania is using SAS to do value-added analysis of its test scores. Eleven districts are already using the data as early adopters. Sanders does a nice job providing a non-technical description of value-added metrics. The state also provides some info on cost - $2.00/year/student - when all 501 districts are on board. Unlike a couple of other states, value-added assessments will be used diagnostically to track student progress but not to evaluate teacher performance. This piece gets to a lot of important issues, including growth of students who are already at high attainment. Take a look.

Chris

Saturday, September 02, 2006

Examining the incentives for and against reducing high school dropout rates

The Brownsville Herald does a good job examining the incentives confronting teachers and schools around dropouts. Districts want dropout rates to decline to avoid systemic sanctions aimed at districts with high rates. Individual teachers, on the other hand, may actually have incentives to counsel students out of attending, since the pressure on teachers is to get classroom, grade-level, and school scores on state tests to increase. Students who are very far behind would require much more help than students who are near proficiency. The incentive is to neglect the students at the lowest levels.

The article goes on to study the economic impact of dropouts on the economy of the region. Because dropouts earn far less than graduates, they pay less in payroll taxes, are more likely to consume social and/or welfare services, and are more likely to engage in criminal activity.

Chris

Thursday, August 31, 2006

California has spent $70 million to almost be able to track students over time

So, $70 million doesn't buy as much as it used to. This article claims that state officials are knowingly underreporting dropout rates to avoid federal sanctions. California is no exception in its difficulties building a longitudinal system for tracking students and their outcomes. As the author points out, only Florida and Texas have truly robust systems. Other states, such as North Carolina, have had hugely expensive failures. A new bill has been introduced for this year's legislative session that would fund the completion of the system. But even some of that bill's backers - namely the teachers' union - do not support the next logical step of linking teacher IDs to students.

Chris

Sunday, August 27, 2006

The Plain Dealer worries about increasingly complex school report cards

The increasingly complex school report cards and the looming introduction of "mind boggling" value-added analysis to the Ohio accountability system have folks worried that they will not be able to use or trust the new information. This is a real problem for NCLB. The bar on accountability is going up, but Joe Public doesn't know how to read the instructions for operation.

Chris

Friday, August 25, 2006

Money makes the world go around, the world go around,....

Money may be an important fix. Very high salary levels would ramp up competition for slots, which would squeeze out mediocre candidates. I'd like to see the math on what sort of wage would encourage enough people to retool for teaching.

Chris

Thursday, August 24, 2006

Higher education and value-added - check the discussions

When I first found this, I thought it was an interesting piece on the logic and difficulties of measuring the value added by a multi-path undergraduate experience. However, the discussion in the comments is even more interesting. Supporters and detractors present interesting cases that push the limits of where value-added can and cannot be applied. Attainment is still the goal. That's no different in higher education. However, the difficulty of coming up with useful (comparable) metrics is a tough problem. Generating the motivation to take any post-test seriously is another mountain of an obstacle - even if one could conceive of an appropriate testing regime.

Chris

Sunday, August 20, 2006

Knox County, Tennessee Schools (and the rest of the state) dodge some bullets with growth models

As predicted, the introduction of growth metrics as an alternative method for achieving adequate yearly progress has reduced the number of schools identified as failing. Eight schools met AYP using the new growth standard. Overall, the number of schools not making AYP dropped from 159 to 96. More details about the response to being on the failing list are available here.

Chris

Friday, August 18, 2006

About that "weighing the pig" analogy....

Many of you know the "you don't make the pig grow by weighing it" analogy and have heard it applied to NCLB testing requirements. That's bugged me for quite some time as a pretty weak statement. I'm not a fan of testing for no reason, and I hate tests that are not aligned to the education system's learning goals. Poorly aligned tests provide the worst sort of incentives and almost no valuable feedback.

However, I still have a problem with this analogy. Let's think this through. I wouldn't want to compare education to fattening a pig for market. What do we know about pigs?
  1. The growth trajectory of a pig from birth to slaughter weight is a pretty well understood piece of meat science. For more info on specialization in this industry, see the USDA background brief on hog farming.
  2. Pigs grow at well-understood rates throughout the different periods of their maturation. The farmer only needs to provide food and water.
  3. While weighing the pig does not make it grow fatter, you had better believe the farmer weighs the inputs to pig growth (feed) very carefully. Farmers know very specifically how much of which kinds of nutrient sources a pig needs throughout its life cycle. It is in the farmer's interest to do this well and consistently through all stages of growth.
  4. The growth characteristics of a pig are inherent to the pig. The only time the pig needs to be weighed is at time of sale to get the overall purchase weight. If the farmer does the production right, the pig will weigh what it is supposed to weigh at the time of sale. The weighing is about the commercial exchange between farmer and buyer.
What about student learning?
  1. The growth trajectory of child learning is known and taught in child development courses, but we also know that there is great variance. We don't just want to get a child to some arbitrary amount of learning based on the child's characteristics. We have societal norms about what is expected for participation in a democratic society.
  2. The growth of children is affected by the peer group. Pig A does not grow faster or more slowly if Pig B is a slower or faster grower. Peer effects in classrooms and schools are profound. One could think of this as one of the inputs to learning that we don't know much about.
  3. The pig will grow whether the farmer is poor or rich, black or white, whether she likes or does not like pigs, etc. Kids learn better if they have supportive parents, come from homes with adequate resources, speak English well, etc. The things pigs bring to the pen don't affect the pig's growth (apart from genetics around size itself).
  4. We know almost nothing about the inputs to education - at least compared to our farmer. When we look at how much a student has learned, an enormous number of factors enter in to explain the variation one sees across the population. We collect almost none of the data one would need to understand that variation.
  5. The one thing we seem to have in common is the notion of the buyer weighing the pig/student. We publish test scores for schools and districts as a way of telling the buyer (taxpayers and voters) what they got for their investment in public education. That's about the only similarity I can see. Particularly since we as citizens have already paid for the education at that point. This is a retrospective look to see what we eventually received for what we paid up front.
All in all, I don't see that the analogy holds much water.

Chris

Wednesday, August 16, 2006

eScholar Data Definitions - Embrace and Extend?

K-12 data warehouse vendor eScholar provided data definitions for 750 elements to the pK-12 Data Model initiative sponsored by the U.S. Department of Education's National Center for Education Statistics and mapped them to both the SIF and NCES data standards.

This is a bold move for eScholar, but it might provide additional leverage to market to states and districts, since they could guarantee data element alignment out of the box. Microsoft has followed the embrace-and-extend model several times: an influential firm adopts an open standard and then adds substantial features to that standard that can, however, only be used by folks using the "extender's" systems.

This will be an interesting development to follow.

Chris

Sunday, August 13, 2006

Critique and recommendations from the NEA as NCLB is considered for renewal

The NEA discusses some of the unintended consequences of NCLB such as:

  • Clustering of students with disabilities in separate pull-out schools to reduce the risk of failure for regular neighborhood schools
  • Special education teachers are often required to demonstrate highly-qualified status in multiple disciplines. Many states do not yet have workable regimes for making this possible
  • Alternate assessments were specified by NCLB, but only 10 states currently have them in place. It is difficult to create quality tests of any sort. Regular education testing has exceeded the testing industry's capacity. Alternate assessments are even further behind
This is a pretty lucid piece of argument from Patti Ralabate presented by the NEA to the Aspen Institute's Commission on No Child Left Behind.

Chris


Friday, August 11, 2006

Welcome to the math of NCLB - many schools slide into "failing" status

In Vermont, schools categorized as failing climbed from 10 to 61. One of the causes of this six-fold increase was the inclusion of more children who had not been included in the testing system. This is the bane and boon of NCLB. The law requires that schools be held accountable for teaching and testing all students. At the same time, the rising bar means that many schools will have to achieve gains in learning that may not be possible under any circumstances - particularly with the neediest children.

State leaders cite growth models as a way of better reflecting what students are accomplishing on the path to proficiency. Many of the states that applied to the U.S. Department of Education's growth model pilot program cited reducing the number of schools on the "failing" list as one of the key motivators.

We had a study group here at WCER go through all of the applications for the growth model exception. While there is a reduction in the number of schools identified as failing in the short term, the attainment requirements don't get lifted. A growth model would simply put off the dramatic increase of schools identified as failing for a couple of years. At that point, the numbers would climb even more spectacularly. The toy simulation below shows the shape of this dynamic.
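The numbers in this sketch are invented (nine hypothetical schools, a target rising linearly to 100% by 2014, and growth-model credit when a three-year projection clears the current bar); only the pattern matters.

    # All numbers are hypothetical; only the shape of the result matters.
    schools = [(start, g) for start in (40, 55, 70) for g in (1, 2, 3)]
    # (percent proficient in 2006, percentage-point gain per year)

    for year in range(2006, 2015):
        bar = 44 + 7 * (year - 2006)       # rising annual target, 100 by 2014
        status_failing = growth_failing = 0
        for start, g in schools:
            current = min(100, start + g * (year - 2006))
            if current < bar:
                status_failing += 1
                if current + 3 * g < bar:  # projection credit fails too
                    growth_failing += 1
        print(year, "bar=%d" % bar, "status=%d/9" % status_failing,
              "growth=%d/9" % growth_failing)

Run it and the status model starts with 3 of 9 schools failing while the growth model starts with 1; by 2014 they stand at 9 and 8.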

That may actually be part of the strategy. The mechanistic application of the "failing school" label to the many, many schools that are actually doing pretty well for most students would likely create a huge political backlash against NCLB. I don't actually own a tin-foil hat, but this sort of logic would not surprise me.

Chris

Wednesday, August 09, 2006

Virginia pushes ahead with state-wide SIF Project

Edustructures announces the launch of the second phase of their state-wide SIF implementation with a Student Locator application. The list of states deploying (or considering) state-wide SIF includes Wyoming, Oklahoma, and Pennsylvania. Delaware, Nevada, and South Dakota support the submission of some data using SIF.

Vertical integration, the ability to include XML markup that is not part of the SIF standard, and a web-services reporting API seem to be important features in version 2.0 of SIF that will continue to make it a compelling element of system designs for at least the near to mid-term.

Chris

Monday, August 07, 2006

Center for Analysis of Longitudinal Data in Education Research Founded

Faculty from the University of Texas at Dallas are collaborating with the Urban Institute's Education Policy Center's principal investigator, Jane Hannaway, as well as with scholars from Duke University, Stanford University, the University of Florida, the University of Missouri, and the University of Washington.

The researchers will be using state-wide databases from Florida, Missouri, New York, North Carolina, Texas, and Washington as the basis for their research. The research will focus on issues surrounding teacher quality (see Ed Week - subscription).

As Hannaway notes, this data will only be available for secondary analysis to members of CALDER. FERPA concerns prohibit wider access. That's a bummer.

Chris

Saturday, July 22, 2006

Blogging got overtaken by proposal writing

Our value-added research group has been working on a number of proposals linked to various U.S. Department of Education and a couple of private foundation deadlines. It just so happens that they all fall in the latter half of July. Several are out the door and several more are due this coming week. We are proposing more work that links inputs (financial and human resources, professional development, curricular materials, etc.) with outputs (classroom practices, student tests, grades, attendance, etc.) in one urban district. This work would tie ongoing school- and grade-level value-added analysis of test data to information systems that have traditionally been kept separate.

On the national front, we are bidding with one of the applicants for the technical assistance center that will support the U.S. Department of Education's Teacher Incentive Fund. It's a bit of a pig in a poke since we don't know who will be awarded the TIF grants. So, there's no telling how challenging the work will be. We do know from years of experience that getting the models right is brutally hard. For example, if testing is done in late fall or early spring, growth cannot be simply assigned to a single grade or school. The credit for growth has to be apportioned proportionally to the two grades in which it occurred (see the sketch below). Other issues such as how to handle mobile students and teachers and changes in test forms also complicate models. Students retained in grade present a challenge for growth modelers. If a student is retained in fourth grade and takes the fourth grade test again, what is the growth one would expect from 4th grade to 4th grade?
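Here is a minimal sketch of the apportionment problem, assuming (simplistically) that growth accrues uniformly over the days between tests. The dates and numbers are made up for illustration.

    from datetime import date

    def apportion_growth(gain, first_test, second_test, grade_change):
        # Split a score gain between two grades in proportion to the
        # time spent in each, assuming uniform growth between tests.
        total_days = (second_test - first_test).days
        first_share = (grade_change - first_test).days / total_days
        return gain * first_share, gain * (1 - first_share)

    # A fall-to-fall testing window spanning a summer grade change:
    g4_credit, g5_credit = apportion_growth(
        gain=12.0,
        first_test=date(2005, 11, 1),    # 4th-grade fall test
        second_test=date(2006, 11, 1),   # 5th-grade fall test
        grade_change=date(2006, 6, 15),  # end of the 4th-grade year
    )
    print(round(g4_credit, 1), round(g5_credit, 1))  # 7.4 and 4.6

A production model would count instructional days rather than calendar days (summer is not school time), which is exactly the kind of detail that makes these models brutally hard to get right.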

If we win even half of the work proposed, we will have new windows into aspects of schooling and management that will advance our understanding of how best to support decision making at all levels of the educational system.

Chris

Tuesday, July 18, 2006

Indiana wants test scores measured over time

Like many other states, Indiana wants to move to some sort of growth or value-added modeling to keep more schools off the failing list. We've been looking at this at the Wisconsin Center for Education Research. We recently evaluated several of the growth models proposed by states in a series of brown bag sessions this summer. We used longitudinal research data we have in house to impose various state models on the same data set to get an apples-to-apples comparison. One clear factor common to most of the models is that they decrease the percentage of schools identified as failing by 10-12%. This is only a year-one effect, however. By 2-3 years out, if the high growth rates predicted are not met (and that's not too surprising), all states are back at high percentages of schools failing. So, the current flurry of activity may only buy 2 or 3 years of coverage.

Chris

Saturday, July 15, 2006

Ed Trust takes a look at teacher quality

Just in case you wondered how much teacher quality matters - particularly versus exposure to advanced curriculum - Ed Trust has some sobering findings. In a report published on July 8th, the authors (Heather G. Peske and Kati Haycock) provide some scary numbers that reflect things we've heard before, like: students in schools with high percentages of poor and minority students are twice as likely to have novice teachers. They are also more likely to have teachers teaching outside their primary subject area/area of certification. This was a particularly scary quote:
[T]here were stunning differences in levels of readiness according to the quality of teachers in a school. In schools with just average teacher quality, for example, students who completed Algebra II were more prepared for college than their peers in schools with the lowest teacher quality who had completed calculus.
These numbers are the sort of thing that would make me want to go hide under the bed.

Chris

Thursday, July 13, 2006

Just in case you were wondering why realtors support school report card sites

Here we have a study that shows just how much house values increase with a 20% increase in students meeting proficiency: 7%. It seems, though, that the authors have a funny take on value-added reporting when they study its impact on house values. Just looking at the change in proficiency rates between 4th grade and 9th grade cohorts isn't value added, since you don't know if it is the same kids, or anything about the qualities of the instrument.

Time to go back to school.

Chris

Tuesday, July 11, 2006

Missouri struggles to show that new tests meet NCLB quality standards

Missouri administered new tests in grades 3-8 this spring and must now provide technical data to show that the tests meet the quality requirements set forth in NCLB. Maine and Nebraska are the only states that have had tests disallowed. In Maine it was the use of the SAT as the high stakes test for high school students. Illinois is likely to encounter similar feedback as it attempts to use the ACT for high school assessment. NCLB requires that assessments be aligned with state standards. It would be hard to argue that national college entrance exams reflect local standards.

Chris

Monday, July 10, 2006

Discrepancies in graduation rates in the news

A recent Editorial Projects in Education Research Center report published by Ed Week on graduation rates points out discrepancies in state graduation rate calculations (by state) and shows the status of efforts to implement a nationally-accepted definition of graduation rates proposed by the National Governors' Association. What the report (and the article linked to this post) reveal is how hard it is to get agreement on, define, and implement what sounds - on the face of it - to be a fairly simple concept. Imagine the difficulties surrounding dropouts and who gets the "credit" in a high stakes system for that dropout. The last school? What if the student was only there a week? Who gets the credit for a graduation if the student spent most of his or her time at one school and then switches to another, less effective school in the last semester?

These difficulties are all over high stakes data analysis.

Chris

Friday, July 07, 2006

Not a simple story, but evaluators have to love this headline

"State hurting education by not funding data collection"

Hewlett Foundation Education Program Director Marshall (Mike) Smith and Hewlett Program Manager Kristi Kimball have some fairly strong words for the legislature's failure to provide adequate funding for the collection of high quality district data. This is one of the problems with which every state in the nation needs to grapple. The siloed organizational structures of school districts and state educational agencies are a manifestation of the compartmentalization of funding and accountability from both state and federal agencies. Decades of developing stove pipe reporting capacities have left systems fundamentally inadequate for the task of addressing questions about "what works" in education. The effectiveness of complex social phenomena, such as effective educational practices for particular communities, is difficult to measure under the best of circumstances. In education we have data systems designed for different purposes and staff who have always been rewarded for hoarding data and reporting up - not using the data.

The inability of state systems to address "bang for the buck" questions continues to stymie legislatures. After years of building state and district capacity, California seems to have snatched defeat from the jaws of victory.

Chris

Wednesday, July 05, 2006

Michigan jumps into the Pay for Performance fray

Michigan seems poised to join Minnesota and Colorado in using pay increases tied to school-level performance as an incentive for teachers. As the reporter notes, this comes in the wake of an announcement by the Bush Administration to provide $500 million in supplemental funds to support pay for performance plans. It looks like value-added evaluation capacity might be a highly sought set of skills.

Chris

Monday, July 03, 2006

Florida sums up its options under NCLB and 2007 testing

Florida, like a number of other states, is looking at a substantial number of schools (535) failing to make AYP targets for five years running next year. If it happens again, they go under automatic restructuring. However, like many other states, there are a number of schools on this list that are successful in most areas and are only missing AYP in one area. There are no exceptions, however. Schools missing AYP in any area are subject to restructuring.

This dynamic is one of the reasons that Florida (and most of the other applicants) applied for the U.S. Department of Education's growth model experiments. Under a growth model that predicts eventual proficiency, students who are currently below proficiency, but can be predicted to achieve it in the future, can be counted as proficient.

It is no wonder that states with the capacity will want to take this path. It does not obviate the need to achieve 100% proficiency, but it puts the extreme sanctions into the more distant future.

Chris

Saturday, July 01, 2006

A Policymaker's Guide to the Value of Longitudinal Student Data

This posting comes from 2002, but its recommendations remain solid policy. The disturbing thing is how far many states and districts still are from making them a reality.

Chris

Monday, June 26, 2006

Growth Modeling and Grad Student Training

Not sure when we will get some publications out of this, but we are running a set of meetings between our NSF-sponsored Interdisciplinary Training Program and the Value-Added Research Center. Here is our internal announcement. I'll post content as it becomes available.

Chris

ITP/VARC Summer Workshop

The Interdisciplinary Training Program and the Value-Added Research Center at WCER will jointly sponsor a summer workshop on "Growth Models for Adequate Yearly Progress."

When it comes to labeling good and bad schools, NCLB is caught on the horns of a dilemma: Use absolute targets to set the same standards for all schools, or focus on growth to recognize progress relative to student starting points. Until now, NCLB has taken an extreme position by focusing exclusively on absolute targets, which rise at varying rates but must reach 100% proficient by 2014.

Secretary Spellings recently granted waivers to two states -- Tennessee and North Carolina -- to pilot a "growth model" approach to Adequate Yearly Progress. Have these states solved the dilemma of recognizing growth while maintaining absolute standards? That is the question we will consider in our summer workshop. We will also examine the 6 unsuccessful applications to see why they were rejected and whether we like their solutions any better. Finally, we will examine real data from Milwaukee to see how the growth models will play out.

The summer workshop will be held on the following schedule in the 13th floor board room of Education Sciences:

Wednesday, June 7, 12:00-1:00pm -- Introduction: Adequate Yearly Progress under NCLB
--In this session we will review AYP, examine the call for growth models and the instructions to peer reviewers, and select state proposals for study.

Wednesday, June 21, Review of State Proposals and Peer Reviews

Wednesday, June 28, Review of State Proposals and Peer Reviews
--In these two sessions, participants will (a) describe selected state proposals; (b) present the reviewers' critique; (c) present their own critique; and (d) reach a conclusion on how close the proposal comes to solving the fundamental dilemma. As a group, we will decide which approaches are worth trying out with real data.

Wednesday, July 5, Conclusion: Application of Growth Models to Milwaukee
--In this concluding session, we will use data from Milwaukee to carry out the procedures selected in our review of state proposals, and reach a conclusion about the prospects for growth modeling under NCLB.

Friday, June 23, 2006

AEI weighs in on the politics of NCLB

Frederick Hess and Michael Petrilli lay out the history of NCLB. They explicitly focus on the frustration felt by members of Congress about the lack of results from decades of reforms intended to address the achievement gap. There was a consensus that high expectations were important, but the consensus did not extend to root causes. Some thought that low expectations themselves were key. Others identified lack of resources for low achievers as the most important problem to address. A third explanation was school culture and a lack of effective organizational structures that led to institutional paralysis. However, the consensus that all students should be able to learn and all schools should be able to produce significant achievement gains bridged these different understandings.

Politically, the attraction of the NCLB consensus was that it allowed public officials to embrace high standards and champion equal opportunity without having to prescribe uncomfortable solutions or explain exactly what strategies would enable schools to succeed....Ultimately, NCLB was intended to provide political cover to superintendents and school board members to encourage them to take controversial and difficult steps to root out mediocre teachers and administrators, shift resources to poorer schools, challenge collective bargaining provisions regulating teacher transfer and preventing efforts to link pay to teacher quality, and overhaul central office processes.
Hess and Petrilli do a good job laying out the history and the trends that are likely to find their way into the reauthorization. The consensus around high expectations seems to be holding. The major change is the inclusion of growth in student learning as an additional feature (not a replacement of attainment requirements).

Chris

Monday, June 12, 2006

More folks looking at NCLB as a deeply flawed policy instrument

Utah is officially adopting state legislation that preferences state standards over NCLB. Many states are moving to lower proficiency standards because NCLB's expectation of 100% proficiency for all groups is unachievable.

"In some ways it's creating a race to the bottom," [Michael Petrilli of the Fordham Foundation] said. States are obliging schools' and parents' requests to make the tests easy enough to achieve "socially acceptable" pass rates, he said.

How do we keep the best of NCLB - the notion that all kids deserve a good education and that we should be able to know that they are getting it - without the ugly incentive to dumb things down?

Chris

I'm off to chaperone a bunch of middle and high school choristers on a 10-day tour. Back on the 22nd of June.

Saturday, June 10, 2006

More info on the Sanders study of learning benefits of board certification

The Greensboro News-Record provides a little more detail on the study done by William L. Sanders, which "found that students of nationally certified teachers make no greater classroom gains than do other students".

Not exactly good news for folks arguing that greater professionalization and higher qualification standards in the teaching workforce are the key to greater student learning. One worry is that tying incentives to board certification may overlook equally effective teachers.

Chris

Thursday, June 08, 2006

North Carolina - Sorry about that test, do your best.

It turns out that North Carolina decided not to use this year's new test to evaluate student performance. Teachers are instructed to use grades, classroom tests, and other traditional measures to judge student proficiency for promotion. Hmmmm. This is one of the two states that is going to be allowed to use growth modeling? It seems that the last time the state delivered new tests, it got the cut points for proficiency wrong - based on the fact that too many students scored poorly on the tests. The state plans to spend the summer getting those numbers correct.

It seems a little strange that the numbers can be so subjective. I guess the right percent proficient is the one that shows most schools making their AYP numbers. That must be right.

Riiiiiiight.

Chris

Tuesday, June 06, 2006

Even Florida doesn't get the Growth thing right.

Florida is widely thought to have the most comprehensive student information system in the US. This piece lays out the impact of not being selected as a growth model pilot in pretty stark terms.
If it had been accepted, an estimated 43 percent of Florida's schools would have made Adequate Yearly Progress (AYP) under NCLB. Without it, only 29 percent are expected to make AYP, down from 36 percent last year.
With numbers like that, it is no wonder that states are concerned about access to this alternate method to show student progress.

Chris

Sunday, June 04, 2006

Schools Matter - A not very flattering analysis of the recent growth model choices (and NCLB in general)

Schools Matter takes on both the recent decision to allow only North Carolina and Tennessee and tinkering with sanctions to allow more supplemental services before schools become accountable. Blogger Jim Horn notes (as do many others) that NCLB seems designed to crush public schooling by showing nearly all schools to be failing under the law's inflexible and (probably) unachievable metrics.

Chris

Friday, June 02, 2006

NSBA outlines recommendations for the next round of NCLB modifications

While most observers do not expect there to be significant changes in NCLB until an eventual reauthorization in 2009, the National School Boards Association is pushing for a series of fixes to the existing legislation.

Chris

Wednesday, May 31, 2006

An Adoption Strategy for Social Software in the Enterprise. Many-to-Many:

While this post explicitly addresses "social" software, its lessons can also be applied to decision support systems (DSS). Though DSS do not always contain collaborative tools (apart from email or notification features), they are about sensemaking at a distance. The goal of DSS is to help users and decision makers at different levels of the enterprise make sense of larger or smaller sub-systems. The lessons of rollout and the role of leadership are appropriate for decision makers in the education sector as well.

Chris

Monday, May 29, 2006

Growth Models: Results of the pilot request evaluations

Secretary Spellings and the US Dept. of Ed publish the winners and losers in the first round of pilot states.

Chris

Saturday, May 27, 2006

Measuring education outcomes, it's not as easy as it sounds

Graduation rates are an important measure of success for the educational system. The difficulty of measuring dropouts and graduates is outlined in this piece from Kansas. It's hard at the state level and even harder at the federal level. The author gets the problem statements right. We need to have higher quality data, and we need to focus on how we are failing those kids who don't get diplomas.

Chris

Thursday, May 25, 2006

Teacher commitment matters more than certification or experience

Well....I guess that's not too hard to understand. Committed teachers in early grades who take personal responsibility for student learning tend to produce more learning growth.

LoGerfo's other findings reinforce other widely held precepts - supportive leadership makes a difference, and Catholic schools tend to show higher commitment than public schools.

As we work on getting the models right, it will be important to make sure that teacher practice and perceptions make it into the mix.

Chris

Tuesday, May 23, 2006

Hitting the high standards ceiling?

As Schrag points out, it is not surprising that the warts on NCLB are becoming increasingly clear several years into the implementation of the legislation. States with high standards, such as California, are being penalized, since it is becoming ever more difficult for schools to meet the expectation of 100% proficiency in the face of tough standards. Indeed, a judge in California just issued a preliminary injunction barring the state from using the high school exit exam to keep students from graduating. The argument was that many students were not given an adequate opportunity to learn, given their exposure to poor schools and teaching. Without equity of access to a quality education, it is argued, students should not be held to these high standards.

Schrag also points to extreme examples of the narrowing of the curriculum in many schools. Some schools went as far as teaching only math, reading, and gym. Is this what Congress intended?

Chris

Saturday, May 20, 2006

Milwaukee Public Schools Superintendent used growth metrics as part of reform package

Milwaukee Public Schools is now in its fifth year of reporting value-added metrics for individual schools in the system. Sup. Andrekopoulos is using one of the core reporting forms from the value-added report. The district-level report includes a simple 2-by-2 table of schools at each of the high, middle, and elementary levels. These tables lay out high and low attainment on the vertical axis and high and low student value added on the horizontal axis. The district is giving more autonomy to schools that show high attainment and high growth. Schools with low growth and attainment are being singled out and are receiving more direction from the central office, including the placement of "instructional facilitators" in each school. These facilitators report to central curriculum and instructional staff rather than to the local leadership.
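For readers who have not seen one of these reports, here is a minimal sketch of the quadrant logic. The cutoffs and school numbers are invented; this is not MPS's actual method.

    # Hypothetical data: school -> (percent proficient, value-added score).
    schools = {"A": (82, 1.3), "B": (81, -0.4), "C": (48, 1.1), "D": (45, -0.9)}

    ATT_CUT = 65.0  # made-up attainment cutoff (e.g., a district median)
    VA_CUT = 0.0    # growth above or below the district average

    def quadrant(attainment, value_added):
        row = "high attainment" if attainment >= ATT_CUT else "low attainment"
        col = "high growth" if value_added >= VA_CUT else "low growth"
        return row + " / " + col

    for name, (att, va) in schools.items():
        print(name, quadrant(att, va))
    # School A lands in high/high (earning more autonomy); school D lands
    # in low/low (receiving more direction from the central office).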

It is important to note that this is not classroom or grade level value added. This is not a system that sanctions individual teachers. Rather, it focuses on research in the district on effective teaching strategies that correlate with improved student outcomes and seeks to implement these strategies in failing schools.

Chris

Thursday, May 18, 2006

And they're off - 2 states allowed to experiment with growth models

It seems that North Carolina and Tennessee will both be allowed to add growth models to their official NCLB compliance frameworks. It remains to be seen how this will actually play out. Tennessee will be using a proprietary model that has never had any rigorous external evaluation. North Carolina was just in the news this week for the major cost overruns and failures surrounding its state-wide student data system.

Chris

Wednesday, May 17, 2006

Data Quality and the risks of "running with what we have"

There is a great temptation for education (and other) organizations to just "get something up" and call it a data warehouse as part of a strategy of retaining the support of senior leaders. One of the problems data warehouse designers have in organizations with little history of decision support is that the clients (program area staff) literally cannot identify needs that extend past their current experience with data. One common solution is to take the current operational data and its definitions (such as they are) and simply load them into a warehouse. One can then take existing reports as the design documents for data marts.

The good thing about this approach is that it provides a wonderful teaching environment for bringing program staff into the discussion using data and representations that they know and can make sense of. The risk is that they will see this and want to run with it. It is guaranteed that these data (and definitions) will contain serious quality problems that were not exposed or stressed under the older, more constrained reporting system. While this might seem like an early success, going forward with this system can be very risky. Program staff are experts in their programs. Data problems will emerge, and staff are likely to blame the system rather than the data or collection processes. They are also likely to see the problems as someone else's problem and not be receptive to requests that they "clean" the data. They may come back with requests for IT to fix the transactional system.
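Here is a sketch of the kind of profiling pass that can surface these quality problems before program staff run with the data. The file and field names are hypothetical.

    import csv
    from collections import Counter

    def profile(rows, key="student_id"):
        # Count duplicated keys and blank values per column - two of the
        # most common surprises in freshly loaded operational data.
        ids = Counter(r[key] for r in rows)
        dupes = {k: n for k, n in ids.items() if n > 1}
        blanks = {col: sum(1 for r in rows if not r[col]) for col in rows[0]}
        return dupes, blanks

    with open("enrollments.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    dupes, blanks = profile(rows)
    print(len(dupes), "duplicated student IDs; blanks per column:", blanks)

Sharing results like these with program staff early frames data quality as a shared collection problem rather than a system defect.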

This is a cautionary tale for states working on getting their warehouses up as rapidly as possible.

Chris

Monday, May 15, 2006

The Ed Trust compares proposed growth models

The Education Trust has been very explicitly concerned about the risks of relying only on growth models for accountability. There is the concern that focusing only on growth rates will lower attainment expectations for the nation's neediest students. The Trust compares the stark differences between the Alaska and Arkansas plans in its article and provides a table that compares the strengths and weaknesses of all of the proposals. This is an easy and informative read for those not prepared to wade through all of the proposals themselves.

Chris

Friday, May 12, 2006

Ed Week delivers research report on data use

An excellent survey of the lay of the land at the school, district, and state level with respect to data use and decision support.

Chris

Thursday, May 11, 2006

Exclusion of small subgroups from annual AYP reporting

Here is one of those cases in which the reliance on a single metric to both provide feedback to the instructional system and, at the same time, hold folks accountable bites one in the posterior. The use of confidence intervals around measures with low sample sizes is a critical part of interpreting statistics. Most statistics textbooks cite a sample size of 15 as the minimum needed to justify the assumption of a normal distribution. Given the measurement error associated with traditional standardized tests, sample sizes of 2-4 times that number would not be an unreasonable standard for reporting.

An inaccurate measure is fine for providing instructional feedback. Teachers have a wealth of information about student performance. The high stakes test result is just one observation among many. Low reliability is not a problem in this case. However, for high stakes, the use of low sample sizes (and the resulting lack of confidence in the point estimate) creates serious concerns about the consequential validity of any sanctions or rewards based on that measurement.
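A quick illustration of the sample-size point, using the standard normal-approximation interval for a proportion (itself rough at small n):

    from math import sqrt

    def wald_ci(n_proficient, n, z=1.96):
        # 95% normal-approximation confidence interval for a proportion.
        p = n_proficient / n
        half = z * sqrt(p * (1 - p) / n)
        return max(0.0, p - half), min(1.0, p + half)

    for n in (15, 30, 60, 500):
        lo, hi = wald_ci(round(0.6 * n), n)
        print("n=%3d: 60%% proficient, 95%% CI %.0f%% to %.0f%%"
              % (n, lo * 100, hi * 100))

At n = 15 the interval runs from roughly 35% to 85% - far too wide to hang a sanction on - while at n = 500 it narrows to about 56% to 64%.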

Chris

PS This Week in Education also has some good links on this story.

Wednesday, May 10, 2006

Back from my 20th year grad school reunion

No computer on the road! Two weeks with no email or phone calls! Working through a large backlog of stuff to post.

Chris

Saturday, April 22, 2006

Technical documentation of value added metrics in the UK

Whoa baby! Those are some wacky value-added metrics. The pretests are based on averaging across math, science, and reading. What in the world does that measure? Post-tests are based on the highest eight scores on a battery of tests - so individual post-test scores aren't even based on the same tests.

I guess some of the state growth models being suggested in the U.S. don't look that bad.

Chris

Wednesday, April 19, 2006

Long and Short Decisions - We need flexible tools

Engineers without Fears points out an important consideration for those building decision support systems for educational leaders. The large temporal grain size of state data systems leads to the creation of systems that support "long" decisions - problems that allow time for consultation and consideration. The problem with this notion is that leaders at the district or school level often have to make "short," shoot-from-the-hip decisions and have little structured information to consider when the time to decide arrives. Many leaders will have to rely on the resources provided by the state - even though they were not provided for that purpose and may be unreliable in a "short" context.

The creation of robust use cases that help design "long" decision resources needs to be accompanied by cases that show the appropriate use of such data in "short" situations. These cases should also outline how data likely to be available to local decision makers can be combined with state data to make better decisions.

Chris

Monday, April 17, 2006

How do modern leaders view social capital?

I like the stuff at Connectedness. Bruce does a good job summarizing Brass and Krackhardt and their description of leadership under strong and weak tie constellations. The simple examples do a good job laying out career-stage and other important considerations. This sort of analysis is a vital part of the engagement work we do with state agency and school district program staff and administrators. Without understanding leaders' network strategies (often unarticulated), it would be difficult to intervene successfully.

Chris

Friday, April 14, 2006

National Transcript Center aims to deliver commercial transcript transmission solution

OK. I guess preventing fraud in transcripts is important. That wouldn't quite be my first priority. The nirvana I seek is the ability to use transcript data within a district to look at students and their opportunity to learn. It's tough to understand differential outcomes of kids if all you know is their characteristics and maybe the teachers they had. We all know that scope and sequence matter when designing curriculum. It follows that we need to know this stuff to figure out if curricula, PD efforts, new approaches to educational guidance, etc. are working or not. That should be what is driving the automation of transcripts.
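
To make that concrete, here is a sketch of the kind of within-district question transcript data would let us ask. The table layouts, courses, and scores are all hypothetical, and the example assumes pandas:

```python
# A sketch of the within-district analysis transcript data would enable.
# Table layouts, courses, and scores are hypothetical; assumes pandas.
import pandas as pd

transcripts = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3, 3],
    "grade":      [8, 9, 8, 9, 8, 9],
    "course":     ["Pre-Algebra", "Algebra I", "Algebra I", "Geometry",
                   "Pre-Algebra", "Algebra I"],
})
outcomes = pd.DataFrame({
    "student_id": [1, 2, 3],
    "grade10_math_score": [540, 610, 525],
})

# Collapse each student's course-taking into an ordered sequence, then
# compare outcomes by sequence -- opportunity to learn, not demographics.
seqs = (transcripts.sort_values("grade")
        .groupby("student_id")["course"]
        .agg(" -> ".join)
        .rename("sequence")
        .reset_index())
merged = outcomes.merge(seqs, on="student_id")
print(merged.groupby("sequence")["grade10_math_score"].mean())
```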

Chris

Wednesday, April 12, 2006

Teaching leaders to see social capital

As we consider methods for intervening in complex education organizations to improve decision making, I have been looking for resources to provide a wide range of metrics. It's clear that we will be able to improve standard report production efficiency for test data and other high-demand, high-cost products. However, it will be vital to demonstrate the payoffs to units and leaders without direct connections to large-scale data warehouse efforts. Social network analysis may provide insights into the structure and effectiveness of data sharing and expertise networks that could radically improve agency performance.

Chris

Monday, April 10, 2006

One of the states being considered for permission to use growth models discusses the prospect

The state superintendent's chief of staff downplayed the state's earlier criticism of NCLB. This is the tack that several states have pursued recently. Many people believe the expectations embedded in the legislation are unrealistic and unachievable. The sense seems to be that it will collapse under its own illogical requirements. Instead, it makes sense for states to work on more realistic systems to study growth.

Chris

Saturday, April 08, 2006

The fundamental problem with "over the bar" NCLB-style reporting from a district teaching resource-poor children

This quote nails one of the primary problems with NCLB reporting. For schools or teachers working with resource-poor children, attainment models can be fundamentally disenfranchising.
'I don't fundamentally disagree that we should be held accountable, but you have to look at where the students are when we get them and measure the growth by the time they leave you,' Young said. 'None of the growth that occurred below the bar is counted.'
As much as anything else, it is an issue of respect. People want to have progress acknowledged. A feedback system that treats this single point as the only legitimate reference for success is severely limited.
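
A toy illustration of the quote (all scores invented): a class can post large gains while an attainment-only report shows no movement at all.

```python
# A toy illustration of the quote above: large gains below the bar are
# invisible to attainment-only reporting. All scores are invented.
BAR = 400  # hypothetical proficiency cut score

fall   = [250, 280, 300, 320, 410]
spring = [340, 365, 380, 395, 415]  # big gains, mostly still below BAR

pct_fall   = sum(s >= BAR for s in fall) / len(fall)
pct_spring = sum(s >= BAR for s in spring) / len(spring)
growth     = sum(b - a for a, b in zip(fall, spring)) / len(fall)

print(f"proficient in fall:   {pct_fall:.0%}")
print(f"proficient in spring: {pct_spring:.0%}")  # unchanged at 20%
print(f"mean growth:          {growth:+.0f} points")  # +67
```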

Chris

Thursday, April 06, 2006

Examining the role of open source at Economist.com

Open-source models are one of the policy issues on our plate as we work with multiple states and districts on decision support tools, data warehouse designs, and statistical models. The costs of proprietary tools and the limits we see in their designs raise the question of whether open-source approaches might improve system performance and provide more accurate information. This will threaten groups that rely on the proprietary nature of their work as a major component of their business model.

Chris

Tuesday, April 04, 2006

Spellings speaks at educational data summit

While the Secretary of Education seems to be surprised that we make educational decisions based on anecdotal data, she states the problem pretty well. Federal programs that have for years encouraged stovepipe system development and vertical accountability are a primary enabler of this situation. The following quote does, however, indicate how out of step education is with the rest of the working world.

"It's hard to believe that we're just getting started in this endeavor in education. In other fields we expect standards and evaluations as a matter of reaching a diagnosis to correct problems," Spellings said. "Without data and information and sound decision making, it's basically guess work."

Chris

Saturday, April 01, 2006

DoEd Growth Models Candidates Announced

Of the 13 states that applied to begin using growth models as a part of their NCLB compliance programs, the following 8 - Alaska, Arkansas, Arizona, Delaware, Florida, North Carolina, Oregon, and Tennessee - have been forwarded on to the review committee for consideration. The application materials of these states can be found at the following link.

It will be interesting to see what the press, interest groups, and scholars of growth modeling make of this. Much of the talk among modeling experts is that most of these models don't actually have much to do with growth. The restrictions imposed by the RFP make it difficult for any state to accomplish much that is useful with this effort.

Chris

Thursday, March 30, 2006

Schools 'forced to behave like supermarkets'

The UK has moved to value added reporting of secondary school outcomes. What the author seems to mean with the title of the piece is that the pressure to "play the game" will encourage secondary schools to push students to take more challenging exams and provide less support to their feeder primary schools. If the primary schools do relatively worse, the secondary school will be shown to be adding even more value.

This is exactly the sort of perverse incentive structure that opponents of value-added measures fear.

Chris

Tuesday, March 28, 2006

Diagnostic assessment and learning

One of the issues we are tracking as a part of a "value-added" approach to educational improvement is the role of diagnostic assessments. There is surprisingly little written about interim/formative/diagnostic assessments other than as useful markers for identifying gaps in student knowledge that need to be addressed in order to improve performance on high-stakes assessments. However, diagnostic assessments also provide feedback to teachers about how well they are implementing their curriculum - a form of ongoing professional development.

Chris

Saturday, March 25, 2006

NCLB backlash

This piece does hit most of the points that opponents of NCLB list as its faults. It also points to the pressure states are putting on the U.S. Department of Education and their respective congressional representatives. State-level politicians will not be able to support a system that labels broadly successful schools as failing.

Chris

Thursday, March 23, 2006

NGA Center for Best Practices

On February 2nd and 3rd, 2006, the NGA Center for Best Practices co-hosted a conference entitled "By the Numbers: a National Education Data Summit". The other co-sponsors were the U.S. Department of Education and the Florida Department of Education. Other partners included the Data Quality Campaign, Bill & Melinda Gates Foundation, Lumina Foundation for Education, Alliance for Excellent Education, and the Florida Channel, WFSU.

The stated purpose of the meeting "was to develop a shared vision for effective, comprehensive K-16 data systems and how policymakers can use data from these systems to develop policies to improve educational outcomes." Speakers were drawn from the partners as well as national labs, universities, and state educational agencies. The topics included a wide range of issues - from mapping data linkages between systems to exploring why similar-looking schools perform differently.

The overwhelming sense I get from this fairly comprehensive list is that folks are skirting the "e" word - evaluation. Questions of "what works?" or "what is the most effective strategy?" are not naturally addressed by the operational data collected by most school systems. One of the difficulties we are encountering is the lack of appreciation for - or even an understanding of the requirements of - evaluation.

Chris

Wednesday, March 22, 2006

Commentary on Illinois resetting proficiency cut scores on ISAT

Illinois, like many other states, regularly resets cut scores for establishing proficiency levels on state tests. Wisconsin's Department of Public Instruction provides a resource page that describes the procedure used in Wisconsin to set cut points (done in 1997 and again in 2003). It also provides references to many other states that use the same procedure. There is little doubt that stock high-stakes tests don't map well onto learning standards set by individual states.

There is a tension between appropriate "scope and sequence" and what the test measures. Some states may require local history to be taught in grade 6, but the externally purchased social studies exam focuses more on U.S. history. The proficiency scores would need to be set in a way that provides a fair report on students' opportunity to learn the content on the test.

This article points to the other tension in this setting. There is a natural incentive for a group setting cut points to make NCLB requirements "a little more achievable" by lowering expectations. Even if that is not the intent, the threat of high stakes can make outsiders question the motivation of committees charged with this work.
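
A quick simulated example of why cut points draw such scrutiny: modest movements in the cut translate into large swings in the reported proficiency rate (scale and scores are invented).

```python
# A simulated example of cut-score sensitivity: small shifts in the cut
# move the reported proficiency rate a lot. Scale and scores invented.
import random

random.seed(1)
scores = [random.gauss(400, 50) for _ in range(1000)]

for cut in (420, 400, 380):
    rate = sum(s >= cut for s in scores) / len(scores)
    print(f"cut score {cut}: {rate:.0%} proficient")
```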

This is part of the price we pay for federalism. Local control at the state and local level means that local constituencies have more control over what gets taught - even as they have less control over what gets measured.

Chris

Monday, March 20, 2006

Colorado's Education Commissioner takes stock of where we are with high stakes testing

Colorado's Education Commissioner William Moloney declared that standardized testing and longitudinal analysis have at last established a beachhead in Colorado. He does a good job showing how education reform based on rigorous testing was actually introduced by several Democratic governors back in the 1990s. Bush, in his words, "trumped them all" with NCLB.

The point I take from all of this is that this stuff is really hard. We are nearly a decade past the "call to action" he cites, and most states still cannot tell what courses individual students have taken or which teachers taught which kids. High-stakes tests currently hold only kids accountable in most states and districts. Getting to the place where we can see what works to close gaps and provide predictive support for change will take considerably more work.

Chris

Friday, March 17, 2006

WCER Value-Added Research Center lays out its work plan

The Tri-State Longitudinal Data System project had its formal kick-off on February 21st and is working with the program office at the NCES and the project managers in the three states to carve out the best opportunities for collective action and sharing.

It has become clear that one of the things we (WCER staff) need to do is to write. One of the important deliverables for this project is to disseminate what we know - what works, what doesn't work, what we cannot do by ourselves, etc. This working paper is one example. It was drafted for a UNESCO-sponsored International Federation of Information Processing (IFIP) working group meeting. Chris Thorn, LDS co-Principal Investigator, is on the executive committee of the Working Group on IT in Educational Management. This group of scholars and practitioners from around the globe meets every other year to present their latest work, network with researchers doing parallel work in other settings, and explore opportunities for international collaboration.

The working paper provides a quick overview of the LDS work and a quick assessment of where knowledge management technologies might play a role in improving educational decision making.

Chris

Wednesday, March 15, 2006

Enhancing Predictive Analytics in the Enterprise Through Location Intelligence

This is a little blue-sky, but given good data one could imagine doing predictive analysis for locating a new school (or redistricting). Given good value-added models for classrooms and students, it would be possible to play out scenarios that model school performance.

Chris

Monday, March 13, 2006

Trade-offs in security, performance, and ease of use in centrally hosted student information systems

This piece lays out a number of issues but most clearly hits the tradeoffs between locally and centrally hosted systems. The impact of differing security policies also arises. Differing remote access and password policies robbed the system of much of its functionality - forcing one district to move to local hosting of the data system. This mismatch in security policy eliminated one of the most important aspects of the system - remote access.

Alignment of policies and a clear understanding of the payoffs and costs at each level of the organization need to be at the forefront for all of the players throughout the project. This clash of policies and needs should not have been a surprise.

Chris

Saturday, March 11, 2006

LPA/NCREL contributions on value-added analysis

I cited RAND's work on value added analysis last month. Another group that is looking at both the technical and policy implications of doing rigorous evaluation of student learning is NCREL/Learning Point Associates (aka Castor & Pollux of education services in the Midwest).

Also see volume 16 of their policy issues publication for more on state-level educational data systems. This is particularly interesting because it outlines the information needs of different groups across the educational system - from state policy makers to parents and community members.

Chris

Thursday, March 09, 2006

Total Cost of Ownership in a Data Warehouse

Even though it is a couple of years old, Ralph Kimball's piece on the cost of ownership and the focus on the end user is still worth reading.

Kimball's top 3 points are enough to delay or derail any data warehouse project:
  1. Data needed for decisions is unavailable
  2. Lack of partnership between IT and end users
  3. Lack of explicit end-user-focused cognitive and conceptual models
Number one is the big stumbling block state education agency folks face. The barriers between silos make it difficult to know what data is already available (or if available, the right to use the data may not be clear). Even in those areas in which data is available at the individual level (student assessment, special education, vocational programs, etc.), there may be no way to tie the data to other programs of interest since data for many programs is only collected at the aggregate level.

The second barrier is one that is being overcome by the demands of NCLB. The requirements for testing all students, reporting on subgroups, etc. are pushing programs to share data and ask hard questions about impact and professional development payoffs. This sense of urgency to figure out what works may be the most long-lasting impact of NCLB on educational systems.

The third barrier is one of sensemaking. How do those responsible for making program decisions make tough decisions? In the past, there has been very little dialogue between state-level program managers and regional service providers or local district staff. One of the specific goals of the Longitudinal Data System grants is to do needs assessment and requirements gathering from across all levels of the education enterprise.

Chris

Tuesday, March 07, 2006

Proactive Data Quality - Deming revisited

In the February 24, 2006 issue of the DM Direct Newsletter, Ken Karacsony reminds us that what Deming told us about auditing for defects - that it is not an efficient use of resources - remains true when the product is information. Quality has to be the goal of everyone on the job. Detecting data anomalies and sending them back for "cleaning" to the unit (in our case a school or district) is not an efficient approach. It does not address the root cause of the quality problem.

Inspection does not improve data quality; it only tells you that there is a problem. Cleansing the data after the fact does not remedy the problem - it only masks the problem. Companies are spending millions of dollars on initiatives to detect and cleanse data rather than applying the resources to actually improve the quality of their information. The best way to improve data quality is to produce quality data.
Data quality has to be part of individual accountability. It has to be sold as an efficiency issue. It has to be sold as part of doing a quality job.

These principles can and should be applied to school data.
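
A minimal sketch of what "quality at the source" might look like in a school data context - validating a record at entry rather than cleansing it downstream. The field names and codes are hypothetical.

```python
# A sketch of "quality at the source": reject a bad record when it is
# entered instead of detecting and cleansing it downstream. The field
# names and codes are hypothetical.
VALID_LUNCH_CODES = {"F", "R", "N"}  # free, reduced, not enrolled

def validate_enrollment(record):
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    if record.get("lunch_status") not in VALID_LUNCH_CODES:
        problems.append(f"unexpected lunch_status {record.get('lunch_status')!r}")
    if not str(record.get("student_id", "")).isdigit():
        problems.append("student_id must be numeric")
    return problems

# The entry screen refuses the record and tells the school clerk why,
# so the root cause gets fixed where the data is produced.
print(validate_enrollment({"student_id": "12A45", "lunch_status": "D"}))
```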

Chris

Monday, March 06, 2006

School CIO suggests doing a data makeover (registration)

This seems a little crazy to me. School CIOs should be the last people one needs to convince that data quality is the most vital component of school decision support. If this is the level of a magazine pitched at "CIOs", the systems being fielded must be pretty scary.

Chris

Friday, March 03, 2006

Why don't we have buyers' guides for higher education?

USA Today reports on Secretary of Education Spellings' concerns about picking a college for a child. There's loads of information on the amenities and social life, but almost nothing on how successful the school is at helping students learn, graduate, and get good jobs or graduate school placements.

This concern suggests that parents are going to expect value-added assessment from colleges and universities at some point. Indeed, there are some suggestions that Congress may step in and require this.

Chris

Thursday, March 02, 2006

What a great guy - and smart too.

A shameless plug: I've been involved in the International Federation for Information Processing Working Group on IT in Educational Management for the past 5-6 years. This is my second piece to come out in their edited volume. It addresses the role of web-based collaboration tools in supporting the work between distributed partners in systemic education reform.

Chris

Tuesday, February 28, 2006

Working with growth data can be scary

The Pennsylvania Value-Added Assessment System is delivering assessment results to the approximately 100 districts participating in the optional program. Scranton's special projects director responded to requests for data with the following quote: "I'm not really comfortable quoting statistics," he said. "We haven't really learned how to work with the data." This comment suggests that the difficulty of training teachers and administrators to use more sophisticated measures is likely to be substantial. These comments come from a professional who has dealt with annual test results before. As the project director for the district, he finds them overwhelming. Just imagine how classroom teachers or parents must feel.

Chris

Monday, February 27, 2006

Social Network Analysis in VARC Design

One of the problems we run into when working in complex, siloed organizations is that connections between people, data resources, and analytical needs are not always obvious. It's also the case that solutions to important policy problems often run at odds with the bureaucratic structure. Social Network Analysis can provide a window into the non-obvious paths within (and between) organizations.

The following questions are typical of a network survey done as a diagnostic:
  • To whom do you typically turn for help in thinking through a new or challenging problem at work?
  • To whom are you likely to turn in order to discuss a new or innovative idea?
  • To whom do you typically give work-related information?
  • To whom do you turn for input prior to making an important decision?
  • Who do you feel has contributed to your professional growth and development?
  • Who do you trust to keep your best interests in mind?
The answers can be used to build a network understanding of the organization that gives a very different picture of how people get things done outside traditional pathways. It can also show barriers that are not apparent within silos.
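
As a sketch of what happens with the answers, responses can be loaded into a directed graph and mined for hubs and brokers. This assumes the networkx package; the names and edges are invented.

```python
# A sketch of turning survey answers like those above into a diagnostic.
# Assumes the networkx package; names and edges are invented.
import networkx as nx

# Each directed edge means "respondent turns to colleague" on one item.
G = nx.DiGraph()
G.add_edges_from([
    ("Ana", "Bo"), ("Carl", "Bo"), ("Dee", "Bo"),   # Bo is an advice hub
    ("Bo", "Eve"), ("Eve", "Fay"), ("Fay", "Carl"),
])

# In-degree shows who people turn to; betweenness centrality flags the
# brokers who connect otherwise separate parts of the organization.
print("sought out:", dict(G.in_degree()))
print("brokers:", nx.betweenness_centrality(G))
```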

Chris

Saturday, February 25, 2006

State educators drop IBM contract

This is an interesting note of caution for firms doing development of transactional and longitudinal systems with state partners. States have the capacity to just walk away from deals. It is often a political necessity. Given the costs associated with large external contracts, it can be politically expedient to take over large projects. I think we will start to see many states and larger districts drop expensive data warehouse and decision support tools in favor of roll-your-own solutions using Oracle or Microsoft tools. The labor costs for out-year maintenance and updates will be far lower. More importantly, state agencies are figuring out that it is critically important to have the human capital to design and build these tools in house. Those skills are a crucial part of organizational improvement efforts. Outsourcing that work can create a critical skill gap.

Chris

Thursday, February 23, 2006

Selection of Growth Model proposal reviewers announced

The U.S. Department of Education has announced the names of the peer review group who will make recommendations to the secretary on which states should be included in the growth model demonstration group.

Chris

Wednesday, February 22, 2006

Balanced Scorecard in Educational Reform

This is an interesting presentation from the CPRE conference in November 2004. The authors present a very interesting overview of the incentive structures provided to teachers and building leaders. These performance incentives are tied to metrics clearly spelled out in the district's balanced scorecard. There are also recruitment and retention incentives.

This may be the kind of environment in which value-added analysis could flourish. It would also be an environment that could lead to the abuse of such analysis if it were applied inappropriately to individual professionals - violating the assumptions of the underlying models.

Chris

Monday, February 20, 2006

Data warehouse benchmarks

A recent article from DM Review reports on a survey of 454 firms engaged in different forms of business intelligence and data warehouse implementations. In particular, the authors focused on the success characteristics of different implementation models. They surveyed 20 DW experts to get a set of metrics. The characteristics fall into the following categories:

Product Measures
  • Information quality: The data warehouse should provide accurate, complete and consistent information.
  • System quality: The data warehouse should be flexible, scalable and able to integrate data.
  • Individual impacts: Users should be able to quickly and easily access data; think about, ask questions, and explore issues in new ways; and improve their decision-making because of the data warehouse and BI.
  • Organizational impacts: The data warehouse and BI should meet the business requirements; facilitate the use of BI; support the accomplishment of strategic business objectives; enable improvements in business processes; lead to high, quantifiable ROI; and improve communication and cooperation across organizational units.
Development Measures
  • Development cost: The cost of developing and maintaining the data warehouse should be appropriate.
  • Development time: The time to develop the initial version of the data warehouse should be appropriate.
Respondents were asked to respond to a series of detailed questions that probed how successful their efforts had been on the measures described above. One interesting finding was that three of the five architectures included in the study scored almost identically. Respondents classified their implementations according to the following list of architectures:
  1. Independent data marts,
  2. Bus architecture with conformed dimensions (bus architecture),
  3. Hub and spoke (i.e., Corporate Information Factory),
  4. Centralized (i.e., no dependent data marts), and
  5. Federated.
Respondents identified independent data marts as the least successful strategy, followed by federated models. The remaining models all scored equally well across the success measures. This goes right to the heart of data warehouse architecture wars. The data does not seem to support arguments that any of the extreme positions on proper DW architecture are well founded.

Chris

Saturday, February 18, 2006

Higher Ed and lifting student achievement

The Association of American Colleges and Universities journal Peer Review published a piece by Carol Geary in 2002 that outlines what would be required to implement value-added analysis for measuring growth in student knowledge across a 4-year education. The primary requirements would be a consistent understanding of the core curriculum required for a bachelor's degree and a move to performance testing that would show growth of knowledge.

One of the ironies of undergraduate education is that colleges use placement tests across the student body to test for basic skills or for advanced placement early in the process and then not at all after that. There is no capstone test.

We have a similar situation in high schools in many states. There are tests in 9th or 10th grade that measure whether students are performing at the required level, but no efforts to measure what students know when they graduate. The value added by high school is mostly unmeasured.

Chris

Friday, February 17, 2006

Columbus, Ohio Public Schools Considers Value-Added Teacher Compensation

The school district and the teachers' union have a draft memorandum of understanding that would place all newly hired teachers under this new pay plan. It would also allow any existing teacher to opt in. There is a notion of career steps in the value-added compensation model, but it is not the lockstep of the current system.

Chris

Wednesday, February 15, 2006

Can test vendors keep up with NCLB demands?

It does seem likely that NCLB demands on test vendors have outstripped their capacity. It is difficult to make tests that stretch high-achieving students, and creating high-quality tests is expensive and difficult. Sometimes known as the ceiling effect, the inability to measure student learning at the high end can artificially create a situation in which it appears that the gap between two groups of students is closing. Since the upper end of the distribution of scores is bounded, the mean for a higher performing group cannot move up as far or as easily as that of a lower performing group.
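
A toy simulation (invented parameters) shows the mechanism: cap both groups' scores at the test ceiling and the measured gap shrinks even though the underlying gap has not moved.

```python
# A toy simulation of the ceiling effect: cap scores at the test ceiling
# and the measured gap shrinks even though the true gap is unchanged.
# All parameters are invented.
import random

random.seed(2)
CEILING = 600

def mean(xs):
    return sum(xs) / len(xs)

high = [random.gauss(560, 50) for _ in range(10000)]
low  = [random.gauss(460, 50) for _ in range(10000)]

true_gap   = mean(high) - mean(low)
capped_gap = (mean([min(s, CEILING) for s in high])
              - mean([min(s, CEILING) for s in low]))

print(f"gap with no ceiling: {true_gap:.0f} points")
print(f"gap as measured:     {capped_gap:.0f} points")  # smaller
```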

Chris

Tuesday, February 14, 2006

Tutor Program Offered by Law Is Going Unused - New York Times

Students in schools identified as failing under NCLB are eligible for free supplemental services. Across the country many districts and states have very low numbers enrolled, despite fairly aggressive advertising. Lack of facilities, qualified instructors, and many other reasons are cited as possible barriers.

One other important aspect of the support services efforts is mentioned only briefly. There is very little evidence about what is working in support services. There is no serious evaluation going on. Researchers at the Value-Added Research Center at UW-Madison are working with Milwaukee Public Schools to design and implement an evaluation of these services to help districts and parents choose the most effective mix of supports.

Chris

Sunday, February 12, 2006

Value-Added Assessment in Higher Ed

Richard H. Hersh, Senior Fellow at RAND, delivered a paper at the AAHE National Assessment Conference (Denver, June 15, 2004) entitled Assessment And Accountability: Unveiling Value Added Assessment In Higher Education. In the paper, Hersh does a good job laying out the rationale for rigorous assessment in higher education, anticipating and addressing many of the concerns colleges and universities are likely to have with such a proposal. In the paper Hersh deals with:

  • Phase I: Experimentation, Incentives, and Rewards
    • Faculty need a chance to address assessment efforts as a scholarly effort. Pick a program to evaluate and use the process to help faculty work through the entire process of creating expectations, implementing the curriculum as intended, and measuring outcomes consistently.
  • Phase II: Development and Diffusion
    • Transparency in programs and outcomes could be particularly important to state-funded universities. In states with tight budgets and declining support for higher education, clarity around the value added by university and college education could be a powerful force for supporting continued investment. (The same goes for K-12 education.)
  • Phase III: Comprehensive Assessment System Development and Implementation
    • Assessment data in general education and within majors would be collected within institutions. Samples of data could also be shared across institutions (within systems, for example) to allow a school to evaluate how a particular program implemented elsewhere might affect outcomes locally.
  • Phase IV: Value Added Data Used to Inform Institutional and State Policy
    • Faculty reward structures (particularly in non-research institutions) could be adjusted to reward programs and practices that delivered higher student value-added outcomes. It would also be possible to evaluate different institutional forms (traditional liberal arts, more applied programs, virtual schools, etc.).

Chris

Thursday, February 09, 2006

Looking for Scholarly Support for Value-Added Assessment

There is still only a small body of work on the appropriateness of value-added assessment. AERA's policy series Research Points published a piece in the summer of 2004. The recommendations are in line with any reasonable methodologist's understanding: value-added assessment is likely to be fairer to all involved, but it is still tough to make high-stakes decisions for individual teachers and kids given the "uncertainty inherent in measurement".

Chris

Tuesday, February 07, 2006

Data overloading and model development

Having worked with several large districts (and more recently several state programs), it is clear that data overloading (additional non-standard field values) is a common problem. DM Review describes the problem this way:
If a database has not been defined with all knowledge workers' information requirements, and that database is not easily extendable, knowledge workers will often use an existing field for multiple purposes.
A common example is the Free/Reduced Lunch program participation field. A program administrator at the district or state level needs to report out the total number of children in each category. Legitimate values for the field may be "F" and "R". However, one office in the district is charged with adjudicating cases in which the family is close to qualifying or mistakenly enrolls when they are not eligible. They are responsible for tracking those denials and enter the letter "D" in the Free/Reduced Lunch field to allow them to run reports at the end of the year. This use of the variable was never anticipated in the new student management system, and the value is purged when the data is loaded into the student data warehouse - erasing data vital to the group who entered it.
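
A minimal sketch of how those "D" values disappear, assuming a load step that whitelists expected codes (the rows and field names are illustrative):

```python
# A sketch of how the overloaded "D" code above gets lost in a load step
# that whitelists expected values. Rows and field names are illustrative.
EXPECTED = {"F", "R"}

source_rows = [
    {"student_id": 101, "lunch": "F"},
    {"student_id": 102, "lunch": "R"},
    {"student_id": 103, "lunch": "D"},  # local "denied" code
]

# The warehouse load keeps only the values it knows about...
loaded = [{**r, "lunch": r["lunch"] if r["lunch"] in EXPECTED else None}
          for r in source_rows]

# ...so the office that tracked denials can no longer run its report.
denials = [r for r in loaded if r["lunch"] == "D"]
print(f"denials visible after load: {len(denials)}")  # 0
```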

Data overloading is one of several data quality issues that will have to be confronted as districts and states move from reporting annual and aggregate data to longitudinal analysis of individual-level data.

Chris

Sunday, February 05, 2006

Data Quality Campaign and Data System Goals

Fundamentals in designing a longitudinal data system

It is clear that these elements are necessary but not sufficient for a robust longitudinal data system. Listed below are other fundamental issues to address when designing a longitudinal data system:
  • Privacy Protection: One of the critical concepts that should underscore the development of any longitudinal data system is preserving student privacy. An important distinction needs to be made between applying a "unique student identifier" and making "personally identifiable information" available. It is possible to share data that are unique to individual students but that do not allow for the identification of that student.
    • These practices are well understood outside of education. This is probably the easiest barrier to overcome (see the pseudonymization sketch after this list).
  • Data Architecture: Data architecture defines how data are coded, stored, managed, and used. Good data architecture is essential for an effective data system.
    • It would be tempting to simply adopt dictionary standards being developed by the SIF group, NCES, or other standards groups. What this misses is the unique accountability and political organization of each state. One size will not fit all states.
  • Data Warehousing: Policymakers and educators need a data system that not only links student records over time and across databases but also makes it easy for users to query those databases and produce standard or customized reports. A data warehouse is, at the least, a repository of data concerning students in the public education system; ideally, it also would include information about educational facilities and curriculum and staff involved in instructional activities, as well as district and school finances.
    • This is still a relatively new area of work in IT. Many of the turn-key "warehouse" systems are still driven by a compliance reporting mindset that only integrates data for reporting up - not for cross-program analysis and improvement.
  • Interoperability: Data interoperability entails the ability of different software systems from different vendors to share information without the need for customized programming or data manipulation by the end user. Interoperability reduces reporting burden, redundancy of data collection, and staff time and resources.
    • SIF is the leading effort in this area. Again, the reduction of burden is very likely the best argument for overcoming bureaucratic resistance. The long-term payoff will be the ability to study program effectiveness. The gains in student learning, professional development alignment, and feedback to teacher education institutions will likely be far greater.
  • Portability: Data portability is the ability to exchange student transcript information electronically across districts and between PreK-12 and postsecondary institutions within a state and across states. Portability has at least three advantages: it makes valuable diagnostic information from the academic records of students who move to a new state available to their teachers in a timely manner; it reduces the time and cost of transferring students' high school course transcripts; and it increases the ability of states to distinguish students who transfer to a school in a new state from dropouts.
    • This is a long term goal. Most districts in Wisconsin still have no consistent transcript data online.
  • Professional Development around Data Processes and Use: Building a longitudinal data system requires not only the adoption of key elements but also the ongoing professional development of the people charged with collecting, storing, analyzing and using the data produced through the new data system.
    • This will be the largest area of cost. The resources needed to train SEA, school, and district staff dwarf the costs of developing the systems.
  • Researcher Access: Research using longitudinal student data can be an invaluable guide for improving schools and helping educators learn what works. These data are essential to determining the value-added of schools, programs and specific interventions.
    • We are only now entering an era in which research about what works in schools can be done at scale. This work has the potential to transform education and education research.
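
On the privacy point above, a minimal sketch of the unique-identifier versus personally-identifiable distinction: a salted hash produces an identifier that links a student's records over time without exposing identity. The salt handling shown is illustrative, not a key-management design.

```python
# A minimal sketch of the distinction above: a salted hash yields an ID
# that is unique to a student without being personally identifying. The
# salt handling here is illustrative, not a key-management design.
import hashlib

SALT = b"agency-secret-salt"  # hypothetical; must be kept private

def research_id(state_student_id):
    """Map a real student ID to a stable pseudonymous research ID."""
    digest = hashlib.sha256(SALT + state_student_id.encode())
    return digest.hexdigest()[:12]

# The same student always maps to the same research ID, so records link
# over time without the underlying identity ever leaving the agency.
print(research_id("WI-0012345"))
print(research_id("WI-0012345"))  # identical
```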

Chris

Thursday, February 02, 2006

Implications of a Service Oriented Architecture

One of our partners on the Tri-State Longitudinal Data Systems project (Minnesota) has already begun to implement an agency wide initiative based on a Service Oriented Architecture. Their data dictionary site also reflects their approach to standards and interoperability. All dictionary elements are publicly available as are the training and internal marketing presentations used to educate and energize agency staffers. Wisconsin is moving in the same direction. The Department of Public Instruction will be able to leverage much of the work done by Minnesota. This collaboration will free Wisconsin to take the lead on other areas of work. It is the culture of data stewardship and data quality - in use - that will be one of the most important contributions of this effort.

Chris

Tuesday, January 31, 2006

Examples of big payoffs available to program improvements

Current teacher turnover levels cost Illinois districts $224 million a year. This is the sort of number that a really solid evaluation plan could turn into a point of leverage. CPS would be an ideal place to do some quasi-experiments on the efficacy of different induction and retention programs.

Chris

Sunday, January 29, 2006

The State's Role in School Improvement

The Education Commission of the States provides a policy brief, School Restructuring Via the No Child Left Behind Act: Potential State Roles, that does a good job laying out a range of strategies for states, from fairly hands-off supporting policies to very activist models including school takeover.

Chris

Friday, January 27, 2006

Data Quality Campaign and its 10 Criteria

What a list. Some of this is going to be very hard to accomplish.
  1. A unique statewide student identifier
    • Mobility makes this tough. State-to-state transfers increase the difficulty enormously.
  2. Student-level enrollment, demographic and program participation information
    • Program participation remains tough to follow. Much of the money flows down to schools. Without the data processing capacity to track the relationship between teachers, students, and courses - something most districts lack - this is impossible. Many programs (state and federal) require only counts by school and grade. There has never been an incentive to track individual student participation - with a few exceptions such as special and vocational education.
  3. The ability to match individual students’ test records from year to year to measure academic growth
    • Many states are now doing this. NCLB was a wake-up call. Still, many states are brand new at this and will take several years to build up useful longitudinal data. (A sketch of the matching problem appears at the end of this list.)
  4. Information on untested students
    • Ideally, many districts and states are going a step beyond this with induction testing for children who move into the district outside of the high-stakes testing window. It is vitally important to make sure that all students are tracked and that schools are held accountable for all kids.
  5. A teacher identifier system with the ability to match teachers to students
    • Without this link it is impossible to provide teachers with the characteristics of their incoming kids. This link is also necessary if one is to study the effectiveness of a particular curriculum given varying levels of teacher experience and/or training. The tie between teacher and student is at the core of the "production function" of education.
  6. Student-level transcript information, including information on courses completed and grades earned
    • This data documents mastery of required content and shows students' opportunity to learn at a system level.
  7. Student-level college readiness test scores
    • While this seems like a good idea, it is tough to administer tests with no stakes attached. Many districts have attempted to use exit exams, with mixed results.
  8. Student-level graduation and dropout data
    • This remains a serious measurement problem. Dropouts are the measurement of a non-event - a student no longer attends school. We don't know what happened - only that we don't see them any longer. This makes it tough to fix the problem. We don't know if it was a failure of the educational system or some other influence.
  9. The ability to match student records between the PreK–12 and higher education systems
    • Student outcomes after high school may be the best way to measure PK-12 productivity. Entrance and placement exams may be the most useful data available.
  10. A state data audit system assessing data quality, validity and reliability
    • It is possible to do technical audits for compliance, but to get this story right one would have to combine this with direct observation to see if reported data aligns with what one observes in the field.
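
Finally, the matching sketch promised under item 3: link on the state ID when it exists, and fall back to name and birth date when it does not. The records and matching rules here are hypothetical; real record linkage needs far more care.

```python
# A sketch of the year-to-year matching problem from item 3: link on the
# state ID when present, fall back to name and birth date when it is
# missing. Records and rules are hypothetical; real linkage needs more care.
year1 = [
    {"id": "1001", "name": "ANA DIAZ", "dob": "1996-03-02", "score": 410},
    {"id": None,   "name": "BO LEE",   "dob": "1996-07-19", "score": 395},
]
year2 = [
    {"id": "1001", "name": "ANA DIAZ", "dob": "1996-03-02", "score": 455},
    {"id": "1088", "name": "BO LEE",   "dob": "1996-07-19", "score": 430},
]

def key(r):
    # Prefer the unique identifier; otherwise use name plus birth date.
    return r["id"] or (r["name"], r["dob"])

index = {key(r): r for r in year1}
for r in year2:
    prior = index.get(r["id"]) or index.get((r["name"], r["dob"]))
    if prior:
        print(f"{r['name']}: growth {r['score'] - prior['score']:+d} points")
```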