Mar 01, 2014
ssc

Searching For One-Sided Tradeoffs

Scott Alexander discusses the concept of one-sided tradeoffs, using examples from college admissions to life hacks, and suggests ways to find opportunities for 'free' gains in various decisions.
Scott Alexander explores the concept of one-sided tradeoffs using college admissions as a starting point. He explains how most decisions involve tradeoffs between different qualities, but suggests ways to find opportunities for 'free' gains. These include insider trading (having unique knowledge), bias compensation (exploiting others' biases), and comparative advantage (specializing in a specific area). He applies this framework to policy debates, life hacks, and personal decisions, arguing that understanding these concepts can help identify opportunities where one can gain benefits without significant downsides. The post concludes with examples like considering nootropics if one isn't afraid of taking drugs, or buying houses on streets with rude names for a discount.

Suppose you are an admissions official for a moderately prestigious college, which is neither the best nor the worst in your state. Your job is to look over people’s SAT scores, high school GPA, and essays on How I Overcame Adversity, and then decide whether or not to admit them to your college.

And suppose that you have a team of subordinates who make the really easy decisions for you. Auto-reject the losers who show up drunk to their interview and spell your institution’s name as “collej” on their applications, pass the rest on to you.

Your job probably doesn’t matter. Yes, there will be some very high quality candidates – the kids with straight As, perfect SATs, and stories about how they personally stopped the civil war in Lebanon despite being born without legs. But they will be using you as their safety school, and whether you accept them or not they will be going to Harvard and you will never see them. You will only be deciding among a small band of students – those too smart to get auto-rejected by your subordinates, but not smart enough to go to a school better than yours.

Given that kids who are good at everything and kids who are bad at everything are equally unlikely to be your target population, your job reduces to choosing what tradeoffs to take. Do you want kids with great SAT scores but terrible grades, kids with great grades but terrible SATs, or kids with mediocre grades and test scores alike? How about kids with terrible grades and terrible SATs, but they’re really really attractive and good at sports?

Even here your job won’t matter too much. Your counterparts at Harvard will presumably be smart people who have a pretty good idea of how important test scores and grades are in terms of the Intangible Qualities That Make You Good At College. If a new study comes out showing that SAT scores determine your future but grades are meaningless, that study will make you want to shift to a high-SAT-low-grade model, but it will equally increase the high-SAT-low-grade kids’ ability to get into Harvard, meaning that you will, to use an economics metaphor, have to buy SAT scores with grades at a lower exchange rate.

So basically no matter how competent you are as an admissions official, all of the kids entering your college will be about equally “good”.

There is a fun legend I heard in a stats class – I don’t know if it’s true – of a psychology professor who got very excited about her new theory that the brain traded off verbal and mathematical intelligence – being better at one made you worse at the other. She got SAT Math and SAT Verbal scores from her students and found it supported her theory. A friend of hers did a replication at his college and found support for the theory there as well.

But larger scale testing disconfirmed the theory. What the professors working off college samples were finding was that all of the kids in their college were equally “good”, in a general sense, so excellence in any quality implied a tradeoff in other qualities. Suppose the professor worked at a mid-tier college – students with SATs much less than 1200 couldn’t get in; students with SATs much more than 1200 could and did go to better schools instead. Then all her students would have SATs around 1200. Which meant a student with an SAT Verbal of 700 would have an SAT Math of 500, a student with an SAT Math of 800 would have an SAT Verbal of 400, and boom, there’s your “trade-off of verbal and mathematical intelligence”. Obviously the tradeoff wouldn’t be perfect, since there’s random noise and since students are also trading off less obvious qualities like attractiveness, wealth, social skills, athleticism, musical talent, and diligence. But it would be more than enough for her to find her correlation if she was looking for it.
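
This selection effect is easy to reproduce in a quick simulation (a hypothetical sketch of mine, not anything from the original post): draw math and verbal scores independently, then keep only the students whose combined score lands near 1200. The admitted group shows a strong negative correlation even though none exists in the population.

```python
# Hypothetical sketch of the selection effect described above: independent
# math and verbal scores, filtered to a "mid-tier college" band around 1200.
import numpy as np

rng = np.random.default_rng(0)

# Population: independent scores on the (roughly) 200-800 SAT scale.
math = np.clip(rng.normal(500, 110, 100_000), 200, 800)
verbal = np.clip(rng.normal(500, 110, 100_000), 200, 800)
print("population correlation:", round(np.corrcoef(math, verbal)[0, 1], 3))  # approximately 0

# Admission filter: combined score near 1200 (better students go elsewhere,
# worse students are rejected).
total = math + verbal
admitted = (total > 1150) & (total < 1250)
print("admitted correlation:",
      round(np.corrcoef(math[admitted], verbal[admitted])[0, 1], 3))  # strongly negative
```

The numbers here are invented; the point is only that filtering on the total manufactures the “tradeoff”.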

This suggests some odd strategies if we’re looking for particular college students. If we want to find the dumbest students in a particular college, we might look at the football star – not because football stars are naturally dumb, but because plausibly a student who couldn’t get in on his wits alone might make it in on the promise of helping the college team. If we want to find the smartest student in a particular college, we might look for someone on a scholarship – because perhaps she would otherwise be at Harvard, but was made less attractive to the Ivy League by her inability to pay them any money.

It also implies some weird strategies for admission officers. How do you maximize student quality when in theory all your job allows you to do is make tradeoffs between different subcharacteristics among students of the same quality? Aside from just hoping the occasional Harvard-caliber student accidentally stumbles into your office, I suggest three potential techniques: insider trading, bias compensation, and comparative advantage.

Insider trading is where you’re just plain smarter than everyone else. Maybe you’re a brilliant psychologist who has invented a test that invariably reveals students’ true potential. You can find kids with terrible grades and terrible SAT scores who will nevertheless shine. If you happen to luck into this position, you’ve got it made.

Bias-compensation is where you try to see if other colleges have biases that you can exploit. Sometimes this is simple and profitable. If Harvard is controlled by anti-Semites and auto-rejects all Jews, then you have a free shot to get Jews with 800 SAT Math, 800 SAT Verbal, and amazing football talent (though good luck finding Jews with amazing football talent). Once again, if you happen to luck into your competitors being stupid, you’ve got it made.

Sometimes it’s not that easy, and you have to kind of spin someone else’s preferences as “bias” when they might secretly have some wisdom behind them. For example, it is no doubt true that college admissions officials are influenced by student charm and social skills. So if you want, you can probably get smarter students if you go for the really really unpleasant students whom everyone dislikes as soon as they open their mouths. You can then declare “success” when your college gets a disproportionate number of academic awards, but unless you are a remarkably single-minded academic-award-maximizer, you may find that your college is kind of horrible now and other schools had pretty good reasons for rejecting these people.

Comparative advantage is where you decide you are going to have radically different priorities than anybody else. Maybe you want to be The Math School and become known for the quality of your math geniuses. So you nab all the students with 800 SAT Math and 400 SAT Verbal and then advertise the heck out of your students’ mathematical acumen. There’s also another sort of comparative advantage, where if you have a great sign language interpretation program and Harvard doesn’t, you can advertise to deaf kids who maybe Harvard doesn’t want because they can’t develop their talents effectively.

So let’s generalize from college to the sorts of choices that we actually face.

In one of the classics of the Less Wrong Sequences, Eliezer argues that policy debates should not appear one-sided. College students are pre-selected for “if they were worse they couldn’t get in, if they were better they’d get in somewhere else.” Political debates are pre-selected for “if it were a stupider idea no one would support it, if it were a better idea everyone would unanimously agree to do it.” We never debate legalizing murder, and we never debate banning glasses. The things we debate are pre-selected to be in a certain range of policy quality.

(to give three examples: no one debates banning sunglasses, that is obviously stupid. No one debates banning murder, that is so obviously a good idea that it encounters no objections. People do debate raising the minimum wage, because it has some plausible advantages and some plausible disadvantages. We might be able to squeeze one or two extra utils out of getting the minimum-wage question exactly right, but it’s unlikely to matter terribly much.)

So there’s some argument to be made that, like the admissions officer, our decisions aren’t too important. I don’t think things are quite that depressing. But, like the admissions officer, we will have to be clever if we want to figure out how to escape the seemingly iron law of tradeoffs.

I recently heard a Catholic guy condemn the “culture of death”, which by his telling consisted of abortion, stem cells, euthanasia, and capital punishment. I’m in favor of three of those things, and I avoid a perfect four-out-of-four only on a technicality: I can’t support capital punishment until it gets better at sparing the innocent and maybe becomes more cost-effective.

My near-unanimous support for culture-of-death issues seems unlikely to be a coincidence, and indeed it isn’t. I have a deep philosophical disagreement with the Catholics here – they think life is a terminal value, I think life is only valuable insofar as it gives certain goods associated with living.

This means from my point of view, the Catholics have a bias in their trade-off arithmetic. They are the equivalent of the anti-Semitic Harvard leadership, who have given me this great gift of trade-off-free students. Just as learning the Harvard leadership is anti-Semitic makes me suddenly want to accept all Jews as a tradeoff-free utility gain, learning that a large portion of the electorate is biased against death means that certain death-related policies can be tradeoff-free utility gains to me.

I will add one more political example. I’ve previously proposed sticking lithium in the water supply as an intervention to promote psychiatric health. People are super creeped out by this – and in fact, so am I, a little bit. But this is encouraging! If people’s response was “actually, we have proof that these quantities of lithium hurt cardiac health” we’d be faced with a useless tradeoff – X psychiatric health against Y cardiac health – and so a policy we’d be squeezing a couple measly utils out of depending on which way the tradeoff went. But if their response is “I see no particular downside, but I am very creeped out by it”, then this is like learning Harvard is anti-Semitic – an explanation for why other people haven’t gobbled up a possible advantage, and a neon sign pointing out potential tradeoff-free gains for you.

We can also use this framework to evaluate life hacks.

Life hacks are touted as “little-known techniques you can use to improve your life”. There are two ways something can fail to be a life hack – either it becomes universally known, or it fails to improve anything. These form a pre-selection kinda like a college selecting students of a certain quality, or a country debating issues of a certain quality. If an intervention was obviously great, then either you’d already do it (think “sleeping at night” or “working at a job to earn money”) – or you would at least feel guilty for not doing so (think “diet and exercise”). If an intervention was useless, no one would call it a life hack (think “hitting yourself on the head with a baseball bat every day”). Life hacks are the things that are sort of in between, where there seem to be some benefits, and also some costs in terms of time and energy and money, and you’re not sure if they’re worth looking into or not.

If you want to do better than trade off your time and energy for the occasional small benefit, you need a theory of why that might be possible.

Every life hacker wants to be an insider trader – someone who is able to outperform competitors with more resources by being a little savvier about biology and psychology. And probably some are. But unless you are the first scientist to discover a new supplement, or the first psychologist to discover a new technique, your trades aren’t that insider and you’ll eventually have to explain why no one else has adopted them.

And most life hackers pay lip service to comparative advantage: “Everyone has their own individual biology and their own set of problems, what works for you may not work for everyone else.” This is pretty plausible. It suggests the reason the whole world isn’t adopting life hacks is because there’s a very high startup cost, where you’ve got to sort through a hundred different things and find the ones that work for you and the ones that don’t, and nobody can do this for you, and if you’re not very smart you’ll get it wrong.

Another form of comparative advantage is willpower. Maybe no one else is doing weight lifting because they don’t have the determination to go to the gym three times a week. This is a fine theory – plausible even – but it’s interesting to see how many of the people who confidently assert their own comparative advantage then buy a gym membership but end up not having the determination to go three times a week.

But in terms of using a tradeoff-based framework to help inform the decision of what lifehacks to try, it seems most promising to consider opportunities for bias compensation.

Like insider trading, bias compensation is claimed a lot more often than I think it can be supported. The polyphasic sleep crowd, for example, tell you that you can increase your free time per day – and all you need to do is stick to a very strict schedule, be very tired for a long time while you’re working out kinks, and abandon all hope of a social life or flexible schedule. To me this seems a lot like the admission official with the bright idea of admitting unpleasant low-social-skills kids: it sounds good if you’re only thinking about the most easily quantifiable results, but when you actually try it you tend to regret it very quickly.

Can we find anything more promising? I think that people are unnecessarily pessimistic about nootropics because they are scared of taking drugs. Fear of taking drugs is an excellent and rational fear to have, but if you happen to lack that fear, or you have enough comparative advantage in pharmacological knowledge / research ability that you can justifiably be less afraid of taking drugs than everyone else, then this starts to look like the lithium-water example: getting free utility by abandoning your sense of creeped-out-ness.

But if you’re going to force me to give you an example of something I actually did differently because of thinking about tradeoffs, I’ll have to go with “try bacopa”.

Bacopa is a memory-enhancing drug that performs very well in studies. But it’s rarely used and it only got a middling ranking on my survey. I think this has something to do with having to take it for three months before it has any effect. Talk about trivial inconvenience. Most people don’t want to bother, so it remains largely uninvestigated, and the nonsuperabundance of bacopa use stands explained without resorting to it being a bad drug or having other tradeoffs we really don’t want. So using it – if you can stand the three month waiting period – has a higher-than-otherwise-expected likelihood of being free utility.

Me? I tried to start taking bacopa, but it gave me terrible diarrhea and I had to stop. Another tradeoff! That should just increase its expected psychological benefits!

Last, something on the lighter side: an article going around the Internet recently claims houses on streets with mildly rude names (example: “Slag Lane”) apparently cost £84,000 less than control houses on more properly named streets (the article does not give me enough information to rule out hypotheses like poor people being more willing to give their streets rude names). If you don’t care what your street is called, this might be another potential free trade-off – buy a house on Slag Lane and save $100,000+. Or buy a house that’s supposed to be haunted if you don’t believe in ghosts. Or buy a house near a prison with a very low escape rate because you trust the statistics and other people don’t.
