How can you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

12 posts found
Apr 24, 2024
acx
56 min 7,795 words 531 comments 160 likes podcast (44 min)
Scott Alexander challenges Robin Hanson's claim that medicine doesn't work by analyzing health insurance studies and presenting evidence of medicine's effectiveness.
Scott Alexander critiques Robin Hanson's claim that medicine doesn't work, analyzing three major health insurance experiments (RAND, Oregon, and Karnataka) and other studies. He argues that these studies are underpowered to detect medication effects and don't support Hanson's conclusion, citing evidence of medicine's effectiveness in improving survival rates for various diseases.
Aug 03, 2017
ssc
7 min 931 words 172 comments
Scott Alexander examines the underutilization of prediction aggregation platforms like Metaculus, exploring potential reasons and expressing surprise at their lack of widespread adoption.
Scott Alexander discusses the potential of prediction aggregation platforms like Metaculus, questioning why they haven't gained more traction despite their proven accuracy and utility. He explores various explanations, including government regulation, public perception, and signaling issues. The post includes insights from Prof. Aguirre of Metaculus, who highlights challenges such as limited resources and the difficulty many people have in understanding probabilistic predictions. Scott expresses surprise at the lack of wider adoption and suggests that the bottleneck in scaling these platforms seems unnecessary given the abundance of interested, capable predictors.
May 30, 2016
ssc
21 min 2,865 words 395 comments
Scott Alexander speculates on a future 'ascended economy' where AI-run corporations dominate, discussing potential implications and risks of such a system.
This post explores the concept of an 'ascended economy', where economic activity becomes increasingly detached from human control. Scott Alexander speculates on a future where corporations are run by algorithms, workers are replaced by robots, and even investment decisions are made by AI. He discusses the potential implications of such a system, including the formation of self-sustaining economic loops that don't involve humans, the difficulty of regulating AI-run corporations, and the risks of goal misalignment in superintelligent economic entities. While acknowledging that this scenario is highly speculative and unlikely to occur exactly as described, Scott uses it to explore important questions about AI safety, economic evolution, and the long-term consequences of automation.
May 28, 2016
ssc
65 min 9,060 words 520 comments
Scott Alexander reviews Robin Hanson's 'Age of Em', praising its creativity while critiquing its assumptions and arguing the future may be even stranger.
Scott Alexander reviews Robin Hanson's book 'Age of Em', which predicts a future where human brain emulations ('ems') dominate the economy. The book explores in great detail how an em society might function, with copied minds running at different speeds and bizarre social dynamics. While praising Hanson's creativity and rigor, Scott critiques some of the assumptions and argues the future may be even stranger and potentially more dystopian than Hanson envisions, possibly resembling Nick Land's idea of an 'Ascended Economy' detached from human values.
Oct 05, 2014
ssc
10 min 1,379 words 162 comments
Scott Alexander explores how perfect predictions of war outcomes, through oracles or prediction markets, could potentially prevent wars, and extends this concept to conflicts between superintelligent AIs.
Scott Alexander explores the concept of using oracles or prediction markets to prevent wars. He begins with a hypothetical scenario where accurate predictions of war outcomes are available, discussing how this might affect decisions to go to war. He then considers the Mexican-American War as an example, proposing a thought experiment where both sides could avoid the war by negotiating based on the predicted outcome. The post then shifts to discussing the potential of prediction markets as a more realistic alternative to oracles, referencing Robin Hanson's concept of futarchy. Finally, Scott speculates on how superintelligent AIs might resolve conflicts, drawing parallels to the idea of using perfect predictions to avoid destructive wars.
Jul 13, 2014
ssc
17 min 2,250 words 111 comments
Scott explores a dystopian future scenario of hyper-optimized economic productivity, speculating on the emergence of new patterns and forms of life from this 'economic soup'.
This post explores a dystopian future scenario based on Nick Bostrom's 'Superintelligence', where a brutal Malthusian competition leads to a world of economic productivity without consciousness or moral significance. Scott describes this future as a 'Disneyland with no children', where everything is optimized for economic productivity, potentially eliminating consciousness itself. He then speculates on the possibility of emergent patterns arising from this hyper-optimized 'economic soup', comparing it to biological systems and Conway's Game of Life. The post ends with musings on the potential for new forms of life to emerge from these patterns, and the possibility of multiple levels of such emergence.
Jun 09, 2014
ssc
8 min 1,099 words 117 comments
Scott Alexander argues that rationality should be viewed as habit cultivation rather than a limited resource, drawing parallels with aikido and lucid dreaming techniques.
Scott Alexander disagrees with Robin Hanson's view of rationality as a limited resource to be budgeted. Instead, he proposes that rationality should be treated as habit cultivation. He draws parallels between rationality and aikido training, as well as lucid dreaming techniques, emphasizing the importance of making rational thinking so natural that it becomes a default state even in challenging situations. Scott argues that cultivating these habits is crucial because irrationality, like dreaming, can depress one's ability to recognize that one is being irrational.
May 28, 2014
ssc
14 min 1,909 words 210 comments
Scott Alexander argues against several popular Great Filter explanations, emphasizing that the true Filter must be more consistent and thorough than common x-risks or alien interventions.
Scott Alexander critiques several popular explanations for the Great Filter theory, which attempts to explain the Fermi Paradox. He argues that common x-risks like global warming, nuclear war, or unfriendly AI are unlikely to be the Great Filter, as they wouldn't consistently prevent 999,999,999 out of a billion civilizations from becoming spacefaring. He also dismisses the ideas of transcendence or alien exterminators as the Filter. Scott emphasizes that the Great Filter must be extremely thorough and consistent to explain the lack of observable alien civilizations.
May 05, 2014
ssc
1 min 55 words 17 comments
Scott reflects on his hasty utopian science post, viewing it as a success for prompting Robin Hanson to expand on academic prediction markets.
Scott Alexander reflects on his previous post about utopian science, acknowledging it was somewhat rushed and not fully developed. However, he considers it successful because it prompted Robin Hanson to elaborate on his concepts regarding academic prediction markets. The post is very brief, mainly serving to link to and comment on these two articles.
May 31, 2013
ssc
9 min 1,220 words 50 comments
Scott Alexander uses a king-and-viziers analogy to argue that people can be inherently good even when their actions seem selfish, and explores the nature of evil and human goodness.
Scott Alexander explores the concept of human goodness using an analogy of a wise king misled by evil viziers. He argues that people can be inherently good even when their actions seem selfish, much like the king who makes bad decisions based on biased information. Scott suggests that we should identify people with the 'king' of their minds rather than the 'viziers', seeing them as fundamentally good despite their actions. He discusses the nature of evil, defining it as certain habits of mind that make it easy for one's 'viziers' to mislead them. The post ends by relating this concept to Trivers' theory of consciousness.
Apr 06, 2013
ssc
17 min 2,355 words 28 comments
Scott Alexander discusses Robin Hanson's vision of a future with emulated humans, debating the preservation of human values and the nature of future societal coordination.
Scott Alexander recounts a conversation with Robin Hanson about the future of humanity, focusing on Hanson's vision of a Malthusian future with emulated humans. They discuss the potential loss of human values like love in such a future, the concept of anti-predictions, and the ability of future societies to coordinate and solve problems. The dialogue touches on the speed of technological change, the preservation of values, and the potential for cultural variation in a post-human world. Scott challenges some of Hanson's views, particularly on the preservation of human values in a hypercompetitive future.
Apr 05, 2013
ssc
15 min 2,063 words 113 comments
Scott Alexander discusses Robin Hanson's idea of investing charitable donations for later use, exploring psychological resistance to it and attempting to debunk that resistance with various arguments.
Scott Alexander attends a talk on efficient charity and discusses Robin Hanson's controversial ideas about investing charitable donations instead of giving immediately. He explores the psychological resistance to this idea and attempts to debunk it with various arguments, including the declining efficacy of charity over time and the possibility of a technological singularity. Despite initially expecting to conclude that investing donations is a bad idea, his rough calculations suggest it might be beneficial unless there's a high chance of catastrophic events preventing future donation.