How do you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

5 posts found
Oct 17, 2024
acx
37 min · 5,159 words · 647 comments · 221 likes · podcast (30 min)
Scott Alexander reviews Nick Bostrom's 'Deep Utopia', which explores the concept of a technologically perfect utopia and discusses how to maintain meaning and purpose in such a world.
Longer summary: Scott Alexander reviews Nick Bostrom's book 'Deep Utopia', which explores the concept of a technologically advanced utopia where all problems are solved and people can do whatever they want. The book discusses whether such a utopia would be fulfilling or boring, and proposes various solutions to maintain meaning and purpose in such a world. Scott analyzes Bostrom's ideas, critiques some aspects, and expands on the concept, considering additional implications and scenarios not fully explored in the book.
Sep 03, 2024
acx
18 min · 2,459 words · 318 comments · 525 likes · podcast (15 min)
Scott Alexander presents a series of satirical job interviews at Thiel Capital, where candidates share increasingly absurd unpopular beliefs, highlighting the nature of conspiracy theories and contrarian thinking.
Longer summary: This post is a satirical piece featuring a series of fictional job interviews at Thiel Capital. Each interview involves asking candidates to share an unpopular belief they hold. The responses range from absurd conspiracy theories to unconventional interpretations of historical events and scientific concepts. The interviewers' reactions highlight the absurdity of the candidates' beliefs, while also poking fun at the idea of 'based' or controversial opinions in tech and finance circles. The piece uses humor to explore themes of conspiracy theories, contrarian thinking, and the nature of unconventional beliefs.
Aug 21, 2019
ssc
5 min · 678 words · 220 comments · podcast (6 min)
Scott Alexander argues against the fear that testing whether we live in a simulation would anger the simulators, stating that competent simulators would prevent discovery or would expect such tests as part of civilizational development.
Longer summary: Scott Alexander critiques a New York Times article suggesting we should avoid testing whether we live in a simulation to prevent potential destruction by the simulators. He argues that this concern is unfounded for several reasons: 1) Any sufficiently advanced simulators would likely monitor their simulations closely and could easily prevent us from discovering our simulated nature. 2) Given the scale of simulations implied by the simulation hypothesis, our universe is likely not the first to consider such tests, and simulators would have contingencies in place. 3) Grappling with simulation-related philosophy is probably a natural part of civilizational development that simulators would expect and allow. While computational intensity might be a more valid concern, Scott suggests it's not something we need to worry about currently.
Apr 01, 2018
ssc
20 min · 2,790 words · 332 comments · podcast (21 min)
Scott Alexander speculates on how concepts from decision theory and AI could lead to the emergence of a God-like entity across the multiverse, which judges and potentially rewards human behavior.
Longer summary: Scott Alexander explores a speculative theory about the nature of God and morality, combining concepts from decision theory, AI safety, and multiverse theory. He proposes that superintelligences across different universes might engage in acausal trade and value handshakes, eventually forming a pact that results in a single superentity identical to the moral law. This entity would span all possible universes, care about mortal beings, and potentially reward or punish them based on their adherence to moral behavior. The post connects these ideas to traditional religious concepts of an all-powerful, all-knowing God who judges human actions.
Aug 23, 2016
ssc
5 min · 603 words · 353 comments
Scott Alexander explores the implications of Sean Carroll's argument against the simulation hypothesis, suggesting that our inability to explain consciousness might indicate we're in a 'ground-level' simulation.
Longer summary: Scott Alexander discusses Sean Carroll's argument against the simulation hypothesis, exploring the implications if Carroll's reasoning is correct. He posits that a 'ground-level' universe, incapable of simulating other universes, would have to be strange, potentially banning Turing machines while still allowing for conscious observers. Scott then considers a version of anthropics conditioned on consciousness, suggesting that in a ground-level simulation, consciousness would remain inexplicable to its inhabitants despite their ability to understand all other aspects of their universe. He concludes that if Carroll's argument is correct, our difficulty in explaining consciousness might indicate we're in a ground-level simulation.