June 8, 2009

the perils of data

When this article ran in the NYT magazine several months ago, I had a whole post planned out about one particular thread. Joe Nocera describes the evolution of Value at Risk - VaR - a system JPMorgan developed for measuring risk. It became the financial industry's standard risk measure for a few reasons: it boiled riskiness down to a single number, JPMorgan developed it and then gave it away, and it gave bank regulators a simple thing to look at to figure out whether banks were taking on too much risk.

Nassim Nicholas Taleb points out that there's a whole set of events beyond the 99% of normal variation that VaR covered, and over time those events became very significant; there were also several critiques of Nocera's article pointing out that VaR assumed prices varied essentially at random and couldn't account for real-world events that affected risk. I've lost the links to those critiques, or I'd link to them - they were by actual economists who actually know things.

But here's what's more interesting to me than the near-collapse of the financial system - a problem Nocera does cover by summarizing what Till Guldimann, a former JPMorgan banker involved in creating VaR, told him:

"The big problem was that it turned out that VaR could be gamed. That is what happened when banks began reporting their VaRs. To motivate managers, the banks began to compensate them not just for making big profits but also for making profits with low risks. That sounds good in principle, but managers began to manipulate the VaR by loading up on what Guldimann calls 'asymmetric risk positions.' These are products or contracts that, in general, generate small gains and very rarely have losses. But when they do have losses, they are huge. These positions made a manager’s VaR look good because VaR ignored the slim likelihood of giant losses, which could only come about in the event of a true catastrophe."


In other words, the people who created the policy environment built incentive structures around a particular data point, so the people operating in that environment privileged that data point over all others. Turns out that credit default swaps look very good in a VaR model; turns out they also create huge systemic risk by entangling many financial actors in any particular problem.
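
To make that mechanism concrete, here's a toy simulation - my own sketch, with invented numbers, not anything from the article. It compares an ordinary position with modest daily swings against an "asymmetric" one that earns a small premium almost every day and blows up about half a percent of the time. Because 99% VaR only looks at the loss you'd exceed one day in a hundred, the blow-up position can report a better VaR while carrying far more tail risk.

    import numpy as np

    rng = np.random.default_rng(0)
    n_days = 100_000  # simulated daily profit-and-loss observations

    # Ordinary position: modest, symmetric daily swings (invented numbers).
    pnl_ordinary = rng.normal(loc=0.0, scale=2.0, size=n_days)

    # "Asymmetric risk position": a small gain almost every day,
    # but roughly 0.5% of the time it loses a hundred times as much.
    blowup = rng.random(n_days) < 0.005
    pnl_asymmetric = np.where(blowup, -100.0,
                              1.0 + rng.normal(0.0, 0.2, size=n_days))

    def var_99(pnl):
        """99% VaR: the loss exceeded on only 1% of days."""
        return -np.percentile(pnl, 1)

    def avg_loss_beyond_var(pnl):
        """Average loss on the worst 1% of days - the part VaR ignores."""
        cutoff = np.percentile(pnl, 1)
        return -pnl[pnl <= cutoff].mean()

    for name, pnl in [("ordinary", pnl_ordinary), ("asymmetric", pnl_asymmetric)]:
        print(f"{name:10s}  99% VaR: {var_99(pnl):7.2f}   "
              f"avg loss on worst days: {avg_loss_beyond_var(pnl):7.2f}")

Run something like that and the asymmetric position typically reports a lower 99% VaR than the ordinary one - it can even come out negative, meaning the measure sees nothing but profit - while its average loss on the worst days is an order of magnitude bigger. That's the gap Guldimann says managers learned to live in.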

Can anyone think of any other time this has happened? Like maybe in higher education, with a set of rankings? Or how about in K-12 education? Oh that's right, it's called "accountability." It's what NCLB would be doing, if it had more teeth.

We live in an age where people are very interested in data, and in a lot of ways that's great. We should try to figure out how to measure things: the same NYT article mentions a situation in which a recurring data point caught the attention of some managers at Goldman Sachs, and as a result they met, discussed the mortgage market, and decided to take on less risk. That's a good use of data. But blindly privileging any particular data point will leave any system vulnerable to being gamed. I guarantee you that there are schools out there figuring out how to game - not cheat, but game - the testing system. Some of those schools are also doing a good job on other things; others are focusing on specific tests at a real cost to their students. My school tried to game the test by setting up a special academy for students they thought might make 'proficient,' and holding that academy to higher behavioral and academic standards. It may or may not have helped those students; it certainly didn't help anyone else.

The same thing is happening with Clemson University in the Inside Higher Ed article: some of the reforms they're making are good for their students, others are attempts to game the system, but none of them proceed from an honest evaluation of what would make Clemson a better university. It's schmality instead of quality, and I wish the data evangelists would be honest about the way a laser-like focus on data intensifies the pursuit of schmality.

May 14, 2009

what are you going to do with that self-control?

I like this New Yorker article about self-control and meta-cognition, both of which are things I think about a lot (why am I able to delay just about anything, but sometimes totally unable to start things, like the report I should be working on right now?). They're also both trendy education research topics - see the typically ill-informed David Brooks piece on the Harlem Children's Zone, which provides a simple, elegant summary of why people worry about this: "The basic theory is that middle-class kids enter adolescence with certain working models in their heads: what I can achieve; how to control impulses; how to work hard. Many kids from poorer, disorganized homes don’t have these internalized models. The schools create a disciplined, orderly and demanding counterculture to inculcate middle-class values." The New Yorker article is about how self-control works (via the directed use of attention), how it affects the rest of your life (by making it easier to study, save for retirement, etc.), and how people can learn it (by practicing the directed use of attention).

Which actually mostly reminds me of a conversation I had with Stupendous Fish and the Gardener two weeks ago over a lovely steak dinner. We were talking about cause and effect, and how there's actually a fairly narrow window in early childhood when you learn, easily, how cause and effect works. It's the period described in the Baby Scientists episode on This American Life. If you don't learn it then, you have to painstakingly assemble an understanding of it later in life. A lot of the students I worked with in wilderness therapy lacked this understanding, and as a result kept making the same decisions (just a little cocaine, run away from home, sleep with someone when you don't want to, skip school today) despite their dislike for the consequences of those decisions. Someone who understands cause and effect at an intuitive level is eventually going to realize that the way to avoid those negative consequences - fights with parents, arrests, drug addiction - is to stop choosing the things that create those consequences, and the kids who made that basic connection tended to do much better much faster. They'd gotten trapped in a pattern they didn't like, but as soon as they got some distance they could identify the pattern and start making different decisions.

One of the most common reasons that kids miss out on developing that understanding is that they're being abused in some way. One key feature of abuse is that it's illogical - you are praised or punished or criticized or loved not because of anything you did or didn't do, but because your parent (or whoever) is in a good or bad mood at that particular moment. I once had a boss like that, who would respond to the exact same piece of work totally differently depending on how he felt, and it made me crazy. I hated him, and I quit as soon as I could afford to; there was another person working there who had the opposite reaction, constantly trying to please him and feeling terrible about herself because she couldn't. It was a miserable workplace. Anyway: that kind of abuse doesn't usually wreck an adult's view of the world, but when you're 3 it totally prevents you from learning that your actions can affect what happens to you - that meta-cognition is actually worth doing.

And this is, in my view, one of the weak spots in the New Yorker article about self-control - and from what I can tell, in the underlying research. In order to control yourself, you have to think it's worth doing. Part of that comes from the home environment, of course. But my guess is that economic instability can also affect how you see delayed gratification. The researcher in the story, Walter Mischel, describes testing delayed gratification with kids in Trinidad by offering them a small chocolate bar now, or a much larger bar in a few days; later, he tested kids in Palo Alto by offering them one marshmallow now, or two marshmallows when he came back. It struck me that kids raised in an unstable economic situation might rationally believe that the much larger chocolate bar - or even the second marshmallow - would never materialize. And with a several-day lag, they might be right. Maybe Mischel would have a family emergency and have to return to the US; maybe the kid would for some reason not be able to come to school (or wherever the testing location was) that day. No chocolate for you! Inner-city schools are, honestly, often so disorganized that promised rewards and punishments never materialize; part of what the KIPP schools are doing is not just encouraging students to delay gratification, but establishing an environment in which it is rational to delay gratification because you will actually get the reward later on.
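
A back-of-the-envelope way to put it - my arithmetic, not Mischel's, with made-up delivery odds: waiting is only rational if the bigger reward, discounted by the chance it actually shows up, beats the sure thing in your hand.

    def worth_waiting(reward_now, reward_later, p_delivery):
        """Expected value of waiting vs. taking the sure thing now."""
        return p_delivery * reward_later > reward_now

    # One marshmallow now vs. two later, under hypothetical delivery odds.
    for p in (0.9, 0.6, 0.3):
        choice = "wait" if worth_waiting(1, 2, p) else "take it now"
        print(f"chance the promised reward arrives = {p:.0%}: {choice}")

With a two-for-one offer the break-even is a fifty-fifty chance of delivery; if a kid's experience says adults come through less often than that, grabbing the marshmallow isn't a failure of self-control, it's good math.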

The same thing is true for high school students who can see cause and effect - they're not damaged that way - but don't rationally believe that they'll get the rewards of knuckling down and doing the schoolwork. And why should they? They are surrounded by people who have not been economically successful, and the people they know who are successful are not usually that way because of their academic success. Part of reconnecting that logic has to be making sure that it observably, demonstrably makes sense for a kid to delay gratification, to play by the rules, to work hard in order to get somewhere. The somewhere has to be there. That it's not, for some students, is the hard legacy of institutional racism, and the reason that teaching kids self-control - helpful as it is in a sane, well-organized school - isn't enough to create equality of opportunity.