Thursday, August 31, 2006

California has spent $70 million to almost be able to track students over time

So, $70 million doesn't buy as much as it used to. This article claims that state officials are knowingly underreporting dropout rates to avoid federal sanctions. California is no exception in its difficulty building a longitudinal system for tracking students and their outcomes. As the author points out, only Florida and Texas have truly robust systems. Other states, such as North Carolina, have had hugely expensive failures. A new bill has been introduced for this year's legislative session that would fund the completion of the system. But even some of that bill's backers - namely the teachers' union - do not support the next logical step of linking teacher IDs to students.
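For the database-minded, that contested last step is easy to picture. Here is a toy sketch of the kind of link table at stake - all table and column names are my own invention, not anything from California's actual design:

```python
import sqlite3

# Toy longitudinal schema: the politically charged piece is the third
# table, which ties a statewide teacher ID to a statewide student ID
# by course and school year.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student (student_id TEXT PRIMARY KEY, birth_year INTEGER);
    CREATE TABLE teacher (teacher_id TEXT PRIMARY KEY, cert_area TEXT);
    CREATE TABLE enrollment (
        student_id TEXT REFERENCES student(student_id),
        teacher_id TEXT REFERENCES teacher(teacher_id),
        school_year INTEGER,
        course TEXT
    );
""")
```

Once that enrollment table exists, attributing each student's year-over-year outcomes to individual teachers is a single join - which is exactly why it is the piece the union balks at.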

Chris

Sunday, August 27, 2006

The Plain Dealer worries about increasingly complex school report cards

The increasingly complex school report cards and the looming introduction of the "mind boggling" value-added analysis to the Ohio accountability system have folks worried that they will not be able to use or trust the new information. This is a real problem for NCLB. The bar on accountability is going up, but Joe Public doesn't know how to read the instructions for operation.
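For readers wondering what is so mind boggling, a deliberately stripped-down sketch may help. Real value-added systems rest on much richer statistical models; the data and the naive mean-gain expectation below are invented purely for illustration:

```python
# A deliberately simplified "value-added" calculation: compare each
# student's actual score to an expectation based on last year's score.
# Real systems use far richer models; these numbers are invented.
students = [
    # (last_year_score, this_year_score)
    (410, 432), (395, 401), (450, 470), (380, 399),
]

# Naive expectation: every student is expected to gain the group's mean gain.
mean_gain = sum(post - pre for pre, post in students) / len(students)

for pre, post in students:
    value_added = (post - pre) - mean_gain  # above/below expected growth
    print(f"pre={pre} post={post} gain={post - pre} vs expectation: {value_added:+.1f}")
```

Even this toy version shows why the reports are hard to read: a student can post a solid gain and still come out negative on the value-added column.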

Chris

Friday, August 25, 2006

Money makes the world go around, the world go around,....

Money may be an important fix. Very high salary levels would ramp up competition for slots that would squeeze out mediocre candidates. I'd like to see the math on what sort of wage would encourage enough people to retool for teaching.
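Since I am asking for the math, here is the back-of-envelope version. Every number below is a placeholder assumption, not an estimate of the actual teacher labor market:

```python
# Back-of-envelope: how big a raise would attract a given number of new
# applicants, assuming a (made-up) wage elasticity of teacher supply.
current_salary = 45_000        # hypothetical average starting salary
current_applicants = 100_000   # hypothetical annual applicant pool
target_applicants = 150_000    # pool needed to let districts be choosy
elasticity = 1.5               # assumed: % change in supply per % change in wage

pct_more_applicants = (target_applicants / current_applicants - 1) * 100
pct_raise_needed = pct_more_applicants / elasticity
print(f"Requires roughly a {pct_raise_needed:.0f}% raise, "
      f"i.e. about ${current_salary * (1 + pct_raise_needed / 100):,.0f}")
```

With an elasticity anywhere near that guess, a 50% bigger applicant pool prices out at roughly a one-third raise. The real question is what the elasticity actually is.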

Chris

Thursday, August 24, 2006

Higher education and value-added - check the discussions

When I first found this, I thought it was an interesting piece on the logic and difficulties of measuring the value added by a multi-path undergraduate experience. However, the discussion in the comments is even more interesting. Supporters and detractors present interesting cases that push the limits of where value-added can and cannot be applied. Attainment is still the goal. That's no different in higher education. However, coming up with useful (comparable) metrics is a tough problem. Generating the motivation to take any post-test seriously is another mountain of an obstacle - even if one could conceive of an appropriate testing regime.

Chris

Sunday, August 20, 2006

Knox County, Tennessee Schools (and the rest of the state) dodge some bullets with growth models

As predicted, the introduction of growth metrics as an alternative method for achieving adequate yearly progress has reduced the number of schools identified as failing. Eight schools met AYP using the new growth standard. Overall, the number of schools not making AYP dropped from 159 to 96. More details about the response to being on the failing list are available here.
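The core mechanics of a growth calculation are easy to sketch. Tennessee's actual projection model is more sophisticated; this is just the on-track-to-proficiency idea with invented numbers:

```python
# The basic "growth model" idea: a student below the proficiency cut can
# still count toward AYP if current growth puts them on track to reach
# the cut within a set number of years. All numbers here are invented.
def on_track(score, prior_score, proficiency_cut, years_allowed=3):
    annual_growth = score - prior_score
    projected = score + annual_growth * years_allowed
    return projected >= proficiency_cut

print(on_track(score=480, prior_score=455, proficiency_cut=520))  # True: 480 + 3*25 >= 520
print(on_track(score=480, prior_score=470, proficiency_cut=520))  # False: 480 + 3*10 < 520
```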

Chris

Friday, August 18, 2006

About that "weighing the pig" analogy....

Many of you know the "you don't make the pig grow by weighing it" analogy and have heard it applied to NCLB testing requirements. That's bugged me for quite some time as a pretty weak statement. I'm not a fan of testing for no reason, and I hate tests that are not aligned to the education system's learning goals. Poorly aligned tests provide the worst sort of incentives and almost no valuable feedback.

However, I still have a problem with this analogy. Let's think this through. I wouldn't want to compare education to fattening a pig for market. What do we know about pigs?
  1. The growth trajectory of a pig from birth to slaughter weight is a pretty well understood piece of meat science. For more on specialization in this industry, see the USDA background brief on hog farming.
  2. Pigs have a range of growth rates that is well understood. They grow at predictable rates throughout the different periods of their maturation. The farmer only needs to provide food and water.
  3. While weighing the pig does not make it get fatter, you had better believe the farmer weighs the inputs to pig growth (feed) very carefully. Farmers know very specifically how much of which kinds of nutrient sources a pig needs throughout its life cycle. It is in the farmer's interest to do this well and consistently through all stages of growth.
  4. The growth characteristics of a pig are inherent to the pig. The only time the pig needs to be weighed is at time of sale to get the overall purchase weight. If the farmer does the production right, the pig will weigh what it is supposed to weigh at the time of sale. The weighing is about the commercial exchange between farmer and buyer.
What about student learning?
  1. The growth trajectory of child learning is known and taught in child development courses, but we also know that there is great variance. We don't just want to get a child to some arbitrary amount of learning based on the child's characteristics. We have societal norms about what is expected for participation in a democratic society.
  2. The growth of children is affected by their peer group. Pig A does not grow faster or more slowly if Pig B is a slow or fast grower. Peer effects in classrooms and schools are profound. One could think of this as one of the inputs to learning that we don't know much about.
  3. The pig will grow whether the farmer is poor or rich, black or white, whether she likes or does not like pigs, etc. Kids learn better if they have supportive parents, come from homes with adequate resources, speak English well, etc. The things pigs bring to the pen don't affect the pig's growth (apart from genetics around size itself).
  4. We know almost nothing about the inputs to education - at least compared to our farmer. When we look at how much a student has learned, an enormous number of factors enter in to explain the variation one sees across the population. We collect almost none of the data one would need to understand that variation.
  5. The one thing we seem to have in common is the notion of the buyer weighing the pig/student. We publish test scores for schools and districts as a way of telling the buyer (taxpayers and voters) what they got for their investment in public education. That's about the only similarity I can see, particularly since we as citizens have already paid for the education at that point. This is a retrospective look to see what we eventually received for what we paid up front.
All in all, I don't see that the analogy holds much water.

Chris

Wednesday, August 16, 2006

eScholar Data Definitions - Embrace and Extend?

K-12 data warehouse vendor eScholar provided data definitions for 750 elements to the pK-12 Data Model initiative sponsored by the U.S. Department of Education's National Center for Education Statistics, and mapped them to both the SIF and NCES data standards.

This is a bold move for eScholar, but it might provide additional leverage to market to states and districts, since they could guarantee data element alignment out of the box. Microsoft has followed the embrace and extend model several times. This happens when an influential firm adopts an open standard and then adds substantial features to that standard that can, however, only be used by folks using the "extender's" systems.
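To make the distinction concrete, here is a hypothetical fragment of what such a crosswalk might look like - the element names are invented for illustration and are not eScholar's actual definitions:

```python
# Hypothetical fragment of a vendor-to-standards crosswalk. Element
# names are invented for illustration, not eScholar's real definitions.
crosswalk = {
    "STUDENT_BIRTH_DATE": {"sif": "Demographics/BirthDate", "nces": "Birthdate"},
    "STUDENT_GRADE_LEVEL": {"sif": "StudentPersonal/GradeLevel", "nces": "Grade Level"},
}

# "Extend" would mean shipping elements that exist only in the vendor's
# model, with no mapping back to the open standards:
crosswalk["VENDOR_RISK_SCORE"] = {"sif": None, "nces": None}
```

The more of those unmapped elements a customer comes to depend on, the harder it is to leave the vendor - that is the "extend" half of the play.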

This will be an interesting development to follow.

Chris

Sunday, August 13, 2006

Critique and recommendations from the NEA as NCLB is considered for renewal

The NEA discusses some of the unintended consequences of NCLB such as:

  • Clustering of students with disabilities in separate pull-out schools to reduce the risk of failure for regular neighborhood schools
  • Special education teachers are often required to demonstrate highly-qualified status in multiple disciplines. Many states do not yet have workable regimes for making this possible
  • Alternate assessments were specified by NCLB, but only 10 states currently have them in place. It is difficult to create quality tests of any sort. Regular education testing has exceeded the testing industry's capacity. Alternate assessments are even further behind
This is a pretty lucid piece of argument from Patti Ralabate presented by the NEA to the Aspen Institute's Commission on No Child Left Behind.

Chris


Friday, August 11, 2006

Welcome to the math of NCLB - many schools slide into "failing" status

In Vermont, the number of schools categorized as failing climbed from 10 to 61. One of the causes of this five-fold increase was the inclusion of more children who had not previously been included in the testing system. This is the bane and boon of NCLB. The law requires that schools be held accountable for teaching and testing all students. At the same time, the rising bar means that many schools will have to achieve gains in learning that may not be possible under any circumstances - particularly with the neediest children.

State leaders cite growth models as a way of better reflecting what students are accomplishing on the path to proficiency. Many of the states that applied to the U.S. Department of Education's growth model pilot program cited reducing the number of schools on the "failing" list as one of the key motivators.

We had a study group here at WCER go through all of the applications for the growth model exception. While there is a reduction in the number of schools identified as failing in the short term, the attainment requirements don't get lifted. A growth model would simply put off the dramatic increase in schools identified as failing for a couple of years. At that point, the numbers would climb even more spectacularly.
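A toy projection shows why the relief is temporary. Assume, with invented numbers, a fixed distribution of school proficiency rates, an AYP bar that must reach 100% by 2014, and a growth model that credits schools a flat bump for students on track:

```python
# Toy projection: a growth model delays, but does not prevent, the
# climb in "failing" labels as the AYP bar rises to 100% by 2014.
# School proficiency rates and the growth-model credit are invented.
school_rates = [40, 50, 55, 60, 65, 70, 75, 80, 85, 90]  # % proficient, 10 schools
growth_credit = 10  # percentage points of "on track" credit a growth model adds

for year in range(2006, 2015):
    bar = 45 + (year - 2006) * (100 - 45) / (2014 - 2006)  # bar rises to 100%
    failing_status = sum(1 for r in school_rates if r < bar)
    failing_growth = sum(1 for r in school_rates if r + growth_credit < bar)
    print(f"{year}: bar={bar:4.0f}%  failing (status)={failing_status}  "
          f"failing (growth)={failing_growth}")
```

In this sketch the growth model buys a year or two of relief at each step, but both columns converge on nearly every school failing by 2014.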

That may actually be part of the strategy. The mechanistic application of the "failing school" label to the many, many schools that are actually doing pretty well for most students would likely create a huge political backlash against NCLB. I don't actually own a tin-foil hat, but this sort of logic would not surprise me.

Chris

Wednesday, August 09, 2006

Virginia pushes ahead with state-wide SIF Project

Edustructures announces the launch of the second phase of their state-wide SIF implementation with a Student Locater application. The list of states deploying (or considering) state-wide SIF includes Wyoming, Oklahoma, and Pennsylvania. Delaware, Nevada, and South Dakota support the submission of some data using SIF.

Vertical integration, the ability to include XML markup that is not part of the SIF standard, and a web-services reporting API seem to be the important features in version 2.0 of SIF that will continue to make it a compelling element of system designs for at least the near- to mid-term.
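The extension mechanism is the piece I find most interesting. As I read the 2.0 spec, non-standard data rides along inside an SIF_ExtendedElements block so that standard consumers can simply ignore it; treat the exact markup below as illustrative rather than spec-verified:

```python
import xml.etree.ElementTree as ET

# Sketch of SIF 2.0-style extension markup: district-specific data rides
# inside SIF_ExtendedElements without breaking standard consumers.
# Element and attribute details here are illustrative, not spec-verified.
student = ET.Element("StudentPersonal", RefId="A1B2C3")
ET.SubElement(student, "LocalId").text = "12345"

ext = ET.SubElement(student, "SIF_ExtendedElements")
bus_route = ET.SubElement(ext, "SIF_ExtendedElement", Name="BusRoute")
bus_route.text = "Route 7"  # non-standard, district-specific payload

print(ET.tostring(student, encoding="unicode"))
```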

Chris

Monday, August 07, 2006

Center for Analysis of Longitudinal Data in Education Research Founded

Faculty from the University of Texas at Dallas are collaborating with the Urban Institute's Education Policy Center's principal investigator, Jane Hannaway, as well as with scholars from Duke University, Stanford University, the University of Florida, the University of Missouri, and the University of Washington.

The researchers will be using state-wide databases from Florida, Missouri, New York, North Carolina, Texas, and Washington as the basis for their research. The research will focus on issues surrounding teacher quality (see Ed Week - subscription).

As Hannaway notes, this data will only be available for secondary analysis to members of CALDER. FERPA concerns prohibit wider access. That's a bummer.

Chris