Apr 06, 2013

Poor Folks Do Smile…For Now

Scott Alexander discusses Robin Hanson’s vision of a future with emulated humans, debating the preservation of human values and the nature of future societal coordination.
Scott Alexander recounts a conversation with Robin Hanson about the future of humanity, focusing on Hanson’s vision of a Malthusian future with emulated humans. They discuss the potential loss of human values like love in such a future, the concept of anti-predictions, and the ability of future societies to coordinate and solve problems. The dialogue touches on the speed of technological change, the preservation of values, and the potential for cultural variation in a post-human world. Scott challenges some of Hanson’s views, particularly on the preservation of human values in a hypercompetitive future.

I got the opportunity to talk to GMU professor and futurist Robin Hanson today, which I immediately seized upon to iron out the few disagreements I still have with someone so brilliant. The most pressing of these is his four-year-old post Poor Folks Do Smile, in which he envisions a grim Malthusian future of slavery and privation for humanity and then soundly endorses it. As he puts it:

Our robot descendants might actually be forced to slave near day and night, not to feed kids but just to pay for their body rent, their feed-stocks, their net connection, etc. Even so they’d be mostly happy.

Robin seems to be a total utilitarian who has no objections to the Repugnant Conclusion. It’s a consistent position on its own (though distasteful to me), and taken at face value it has something to recommend it. I’ve been to some horrible places like Haiti. The Haitians have it tough, but they still sing and dance, they still love each other, they still have hopes and dreams. If the far future is Haiti with better sanitation, it wouldn’t necessarily be the worst thing in the world.

But Robin has a slightly higher bar here. He believes that the near future promises advances in the uploading of human minds to computers, creating cyber-organisms he calls “ems”, for “emulated humans”. Ems will have many advantages over biological humans: they need less space and fewer resources, they may be free of the biological need for sleep, and they can be copy-pasted at will. A future of zillions of Malthusian ems competing for hardware and computing power is a little different from zillions of biological humans competing for land and food.

So here is my dialogue with Robin as I remember it. I didn’t take notes, so it’s probably a bit off, and I’m rewriting me being confused and ummming and errrrring and meandering for a while as me having perfectly flowing rational arguments with carefully considered examples. I think I understood Robin well enough to be able to put down what he said without accidentally strawmanning him, but I do notice he was much more convincing and I was much more confused and challenged in person than it looks in this transcript, so perhaps I failed. Nevertheless:

Scott: In “Poor Folks Do Smile”, you say that a future of intense competition and bare subsistence will be okay because we will still have the sorts of things that make life valuable. But in a future of ems, won’t there be competitive advantage to removing the things that make life valuable?

Robin: What do you mean?

Scott: Suppose you have some ems that are capable of falling in love and some that aren’t. The ones that fall in love spend some time swooning or writing poetry or talking to their lover or whatever, and the ones that don’t can keep working 24-7. Doesn’t that give the ones that can’t fall in love enough of a competitive advantage that the ones that can will be outcompeted and destroyed and eventually we’ll end up with only beings incapable of love?

Robin: You can’t just remove love from a human brain like that. There’s no one love module.

Scott: It’s probably very hard to remove love from a human brain without touching anything else. But given that the future is effectively infinitely long, and that in a world of perfect competition it would be advantageous to do this, surely someone will succeed eventually.

Robin: Yes, the future is infinitely long. But you’re speculating post-Singularity here, and the whole point of the Singularity is that it’s impossible to speculate on what will happen after it. I speculate on the near- and medium-term future, but trying to predict the very long-term future isn’t worth it.

Scott: I agree we can’t predict the far future, but this is less a prediction than an anti-prediction. An anti-prediction is…wait, am I doing that thing where I explain something you invented to you?

Robin: No, I didn’t invent anti-predictions. Go on.

Scott: An anti-prediction is…gah, I wish I could remember the canonical example…an anti-prediction is when you just avoid privileging a hypothesis, and this sounds like a bold prediction. For example, suppose I predict with 99%+ confidence that the first alien species we meet will not be practicing Christians. In a certain context, this might sound overconfident – aliens could be atheists or Christians or Muslims, we don’t really know, but since I don’t know anything at all about aliens it sounds overconfident to be so sure it won’t be the Christian one. But in fact this is justified, since Christianity is just a tiny section of possible-religion-space that only seems important to us because we know about it. The aliens’ likelihood of being Christian isn’t 1/3 (“either Christian, or atheist, or Muslim”) but more like one in a trillion (Christianity out of the space of all conceivably possible religions). The only way the aliens could be Christian is if it were for some reason correlated with our own civilization’s Christianity, like if we went over there to convert them, or if Christianity were true and both we and the aliens were truth-seekers. My point is that human values, like love, are a tiny fraction of mindspace. So saying that the far future won’t have them is an anti-prediction.

Robin: Values like love were selected by evolution. We can expect that similar selection pressures in the future will produce, if not the same values, ones that are similar enough to be recognizable.

Scott: The hypercompetitive marketplace of an advanced cybernetic civilization is different enough from an African savannah that I really don’t think that’s true. Love evolved in order to convince people to reproduce and raise children. If ems can reproduce by copy-pasting and end up with full adults, that’s not a society that will replicate the need for love.

Robin: Love is useful for a lot of other things. Probably the same mental circuitry that causes people to fall in love is the sort of thing you need to make people love their work and stay motivated.

Scott: Anti-prediction! Most mind designs that can effectively perform tasks don’t need circuitry that also causes falling in love!

Robin: The trouble with this whole anti-prediction concept is…so what if I told you that in the far future, people would travel much faster than light? Would that be an anti-prediction? After all, most physical theories don’t include a hard light-speed limit.

Scott: The trouble with traveling faster than light is that it’s physically impossible. Are you trying to make the claim that a mind design that doesn’t include something like human love is physically impossible?

Robin: I’m trying to make the claim that it’s not something you can plausibly get to by modifying humans.

Scott: Fine. Forget modifying humans. People just try to build something new and more efficient from the ground up.

Robin: Maybe in the ridiculously far future…

Scott: But we both agree on a sort of singularitarian world-view where “history is speeding up”. The “ridiculously far future” could be twenty years from now if ten years from now they invent ems that can be run at a hundred times normal speed. If the ridiculously far future aka twenty years from now is one where human values like love are completely absent, that seems…really bad. And if we want to prevent it, it seems like that goes through trying to prevent a “merely” Malthusian medium-term future in which people are effective slaves but we haven’t quite figured out how to hack out love yet.

Robin: Attempting to influence the far future is very dangerous. In most cases we can’t predict the long-term consequences of our actions. The near future will be in a much better position to influence the far future than we are. My claim, which you don’t seem to disagree with, is that the near future will be non-hellish and preserve human values like love. Let’s let this near future figure out whether the far future will be unacceptable. As time goes on, people gain better ability to coordinate, so the near future should be better at fixing our problems anyway.

Scott: As time goes on people gain better ability to coordinate?

Robin: Yes. In the old days, most decisions were made at the village or provincial level. Now we’re gradually centralizing decisions to the national and often even the supranational level. The modern world is much more effective at coordinating solutions to its problems than the past.

Scott glances at Michael Anissimov, probably the most vocal Reactionary in Berkeley, who has been standing there listening to the conversation. He looks skeptical.

Scott: But I know Michael over here has been writing a lot claiming the opposite: that the modern world is terrible at coordinating on problems, especially compared to the past. I’m somewhat sympathetic to that argument. In the old days, a king could just declare we were going to do something and it got done. Now we have nightmarish failures of coordination, like the Obamacare bill, where the leftists had a decent and coherent vision for how healthcare should work, the rightists had a reasonable and coherent vision for how healthcare should work, and we smashed them together until we got a Frankensteinian mashup of both visions that satisfied no one. Or how back in the old days, the Catholic Church pretty much controlled…

Robin: Kings and the Church were very good at acting, not at coordinating. They could enforce their choices, but those choices were often terrible and uncorrelated with what anyone else wanted. Modern institutions coordinate.

Michael: But modern coordination is just through increased bureaucracy.

Robin: Call it what you want, it’s still coordinating.

Michael: And the results are often terrible!

Robin: Yes, coordinating seems to divide into two subproblems. The first is getting everyone to agree on a solution. The second is making sure the solution is any good. I don’t claim we have solved the second subproblem, but we seem to be increasingly skilled at the first.

Michael: Really? Like the largest-scale world-coordinating organization we have right now seems to be the United Nations, and it’s famous for not getting anything done.

Robin: The thing with the UN is that at the beginning people expected it to be the umbrella organization under which all world affairs were conducted. But there are a host of other more or less associated organizations like the WTO that are actually doing a lot more.

Scott: You make an interesting case that future coordinating power will be better, but saying “let’s leave this to the future” only works if we know when the future is going to be and can prepare for it. In the case of what Eliezer calls a “foom” where an AI comes and causes a singularity almost out of nowhere – well, if we put off preparing for that for fifty years, and it happens in forty, that’s going to be really bad.

Robin: I think that scenario is very unlikely. In the scenario I believe in, where the increase in technology is led by emulated humans, change will occur on a predictable path. People in that future will know if we’re on the path to eventual complete value deterioration.

Scott: That makes sense. So I guess that our real disagreement is only over the speed at which a singularity will happen, and whether we will know about it in time to protect our values.

Robin: Sort of. Although as I posted on my blog recently, I think “protecting values” is given too much importance as a concept. If any past civilization had succeeded in protecting its values, we’d be stuck with values that we would find horrible, mostly a mishmash of outdated and stupid norms about race and gender. So I say let future values drift by the same process our own values drifted. I don’t mind if future people have slightly weirder concepts of gender than I do.

Scott: I think that’s kind of unfair. You’re assuming the future will vary over certain dimensions where you find variation acceptable. But it might vary in much stranger and less desirable ways than that. Imagine an ancient Greek who said “I’m a cosmopolitan person…I don’t care whether the people of the future worship Zeus, or Poseidon, or even Apollo.” He doesn’t understand that the future also gets to vary in ways that are “outside his box”.

Robin: It’s possible. But like I said, I think we have a very long time before we have to worry about that. I would also suggest you look at the light speed limit. That means that there’s going to be inevitable “cultural variation” in the post-human world, since it will probably include a lot of semi-isolated star systems.

Scott: I still expect a lot of convergence. After all, if this is a hypercompetitive society, then they’ll be kind of forced into whatever social configuration leads to maximum military effectiveness or else be outcompeted by more militarily effective cultures.

Robin: No, not necessarily. There may be an advantage for the defender, such that it takes ten times the military might to attack as to defend. That would allow very large amounts of cultural deviation from the ideal military maximum.

After this the conversation moved on to other things and I don’t have as good a memory. But it was great to meet Robin in person and I highly recommend his blog to anyone with an interest in futurism or economics.
