When this article ran in the NYT magazine, several months ago, I had a whole post planned out about one particular thread. Joe Nocera describes the evolution of Value at Risk (VaR), a system JPMorgan developed for measuring risk. It became the financial industry standard for a number of reasons: it distilled riskiness into a single number, JPMorgan developed it and then gave it away, and it gave bank regulators a simple figure to check to see whether banks were taking on too much risk.
Nassim Nicholas Taleb points out that there's a whole set of events beyond the 99% of normal variation that VaR covered, and over time those events became very significant. Several critiques of Nocera's article also pointed out that VaR assumed prices varied essentially at random, and couldn't account for real-world events that affect risk. I've lost the links to those articles, or I'd link to them - they were by actual economists who actually know things.
But here's what's more interesting to me than the near-collapse of the financial system, and it's a problem Nocera does cover, summarizing what Till Guldimann, a former JPMorgan banker involved in creating VaR, told him:
"The big problem was that it turned out that VaR could be gamed. That is what happened when banks began reporting their VaRs. To motivate managers, the banks began to compensate them not just for making big profits but also for making profits with low risks. That sounds good in principle, but managers began to manipulate the VaR by loading up on what Guldimann calls 'asymmetric risk positions.' These are products or contracts that, in general, generate small gains and very rarely have losses. But when they do have losses, they are huge. These positions made a manager’s VaR look good because VaR ignored the slim likelihood of giant losses, which could only come about in the event of a true catastrophe."
In other words, the people who created the policy environment built incentive structures around a particular data point, so the people operating within that environment privileged the data point over all others. It turns out that credit default swaps look very good in a VaR model; it also turns out that they create huge systemic risk by entangling many financial actors in any single failure.
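Guldimann's point can be made concrete with a toy simulation. This is a minimal sketch, not the actual JPMorgan model, and all the numbers (the payoff sizes, the 0.5% loss probability) are made up for illustration: it compares the empirical 99% VaR of a symmetric position against an "asymmetric risk position" that gains a little almost every day and very rarely loses a lot.

```python
# Illustrative sketch: empirical 99% VaR from simulated daily P&L.
# Numbers are invented for illustration, not drawn from any real model.
import random

random.seed(42)
N = 100_000  # simulated trading days

def var_99(pnl):
    """99% historical VaR: the loss exceeded on only 1% of days,
    reported as a positive number."""
    worst_1pct = sorted(pnl)[int(0.01 * len(pnl))]
    return -worst_1pct

# Position A: symmetric daily P&L, roughly normal (mean 0, stdev 10).
pnl_a = [random.gauss(0, 10) for _ in range(N)]

# Position B: asymmetric - a small gain on 99.5% of days,
# a huge loss on the remaining 0.5%.
pnl_b = [1.0 if random.random() > 0.005 else -150.0 for _ in range(N)]

print(f"VaR(A) = {var_99(pnl_a):.1f}, worst day = {-min(pnl_a):.1f}")
print(f"VaR(B) = {var_99(pnl_b):.1f}, worst day = {-min(pnl_b):.1f}")
# B's rare loss sits beyond the 99th percentile, so its VaR looks
# negligible even though its worst day is far worse than A's.
```

Because B's catastrophe arrives on only 0.5% of days, it falls entirely outside the 1% tail that 99% VaR examines: a manager holding B reports a lower VaR than a manager holding A, while carrying a much larger potential loss. That is the gaming Guldimann describes.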
Can anyone think of any other time this has happened? Like maybe in higher education, with a set of rankings? Or how about in K-12 education? Oh, that's right - it's called "accountability." It's what NCLB would be doing, if it had more teeth.
We live in an age where people are very interested in data, and in a lot of ways that's great. We should try to figure out how to measure things: the same NYT article mentions a situation in which a recurring data point caught the attention of some managers at Goldman Sachs, and as a result they met, discussed the mortgage market, and decided to take on less risk. That's a good use of data. But blindly privileging any particular data point leaves any system vulnerable to being gamed. I guarantee you that there are schools out there figuring out how to game - not cheat, but game - the testing system. Some of those schools are also doing a good job on other things; others are focusing on specific tests, at a real cost to their students. My school tried to game the test by setting up a special academy for students it thought might reach 'proficient,' with higher behavioral and academic standards for that academy. It may or may not have helped those students; it certainly didn't help anyone else.
The same thing is happening at Clemson University in the Inside Higher Ed article: some of the reforms it's making are good for its students, others are attempts to game the rankings, but none of them proceeds from an honest evaluation of what would make Clemson a better university. It's schmality instead of quality, and I wish the data evangelists would be honest about the way a laser-like focus on data encourages the pursuit of schmality.