How do you explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

17 posts found
Sep 10, 2024
acx
19 min · 2,556 words · 435 comments · 205 likes · podcast (15 min)
Scott Alexander refutes Freddie deBoer's argument against expecting major events in our lifetimes, presenting a more nuanced approach to estimating such probabilities.
Scott Alexander responds to Freddie deBoer's argument against expecting significant events like a singularity or apocalypse in our lifetimes. Alexander critiques deBoer's 'temporal Copernican principle,' arguing that it fails basic sanity checks and misunderstands anthropic reasoning. He presents a more nuanced approach to estimating probabilities of major events, considering factors like technological progress and population distribution. Alexander concludes that while prior probabilities for significant events in one's lifetime are not negligible, they should be updated based on observations and common sense.
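For readers who want a feel for the "population distribution" point, here is a back-of-the-envelope sketch. The figures are rough public demographic estimates, not numbers taken from the post; they simply show why a uniform-over-all-of-history prior already fails a sanity check:

```python
# Rough population arithmetic (approximate public estimates, not figures from
# the post): because population has grown so fast, a randomly chosen human is
# far more likely to live now than a uniform-over-history prior would suggest.
humans_ever_born = 117e9    # common demographic estimate of all humans ever born
humans_alive_today = 8e9

share_alive_now = humans_alive_today / humans_ever_born
print(f"~{share_alive_now:.0%} of everyone who has ever lived is alive right now")
# -> ~7%, so living through historically unusual times is not the wildly
#    improbable coincidence a naive 'temporal Copernican principle' implies.
```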
Oct 05, 2023
acx
42 min · 5,791 words · 499 comments · 94 likes · podcast (34 min)
Scott Alexander reviews a debate on AI development pauses, discussing various strategies and their potential impacts on AI safety and progress.
Scott Alexander summarizes a debate on pausing AI development, outlining five main strategies discussed: Simple Pause, Surgical Pause, Regulatory Pause, Total Stop, and No Pause. He explains the arguments for and against each approach, including considerations like compute overhang, international competition, and the potential for regulatory overreach. The post also covers additional perspectives from debate participants and Scott's own thoughts on the feasibility and implications of various pause strategies.
Jul 20, 2023
acx
30 min · 4,172 words · 519 comments · 138 likes · podcast (28 min)
Scott Alexander analyzes the surprisingly low existential risk estimates from a recent forecasting tournament, particularly for AI risk, and explains why he only partially updates his own higher estimates.
Scott Alexander discusses the Existential Risk Persuasion Tournament (XPT), which aimed to estimate risks of global catastrophes using experts and superforecasters. The results showed unexpectedly low probabilities for existential risks, particularly for AI. Scott examines possible reasons for these results, including incentive structures, participant expertise, and timing of the study. He ultimately decides to partially update his own estimates, but not fully to the level suggested by the tournament, explaining his reasoning for maintaining some disagreement with the experts.
Jul 03, 2023
acx
31 min · 4,327 words · 400 comments · 134 likes · podcast (26 min)
Scott Alexander discusses various scenarios of AI takeover based on the Compute-Centric Framework, exploring gradual power shifts and potential conflicts between humans and AI factions.
Scott Alexander explores various scenarios of AI takeover based on the Compute-Centric Framework (CCF) report, which predicts a continuous but fast AI takeoff. He presents three main scenarios: a 'good ending' where AI remains aligned and beneficial, a scenario where AI is slightly misaligned but humans survive, and a more pessimistic scenario comparing human-AI relations to those between Native Americans and European settlers. The post also includes mini-scenarios discussing concepts like AutoGPT, AI amnesty, company factions, and attempts to halt AI progress. The scenarios differ from fast takeoff predictions, emphasizing gradual power shifts and potential factional conflicts between humans and various AI groups.
Mar 30, 2023
acx
15 min · 2,048 words · 1,126 comments · 278 likes · podcast (13 min)
Scott Alexander critiques Tyler Cowen's use of the 'Safe Uncertainty Fallacy' in discussing AI risk, arguing that uncertainty doesn't justify complacency.
Scott Alexander critiques Tyler Cowen's use of the 'Safe Uncertainty Fallacy' in relation to AI risk. This fallacy argues that because a situation is completely uncertain, it will be fine. Scott explains why this reasoning is flawed, using examples like the printing press and alien starships to illustrate his points. He argues that even in uncertain situations, we need to make best guesses and not default to assuming everything will be fine. Scott criticizes Cowen's lack of specific probability estimates and argues that claiming total uncertainty is intellectually dishonest. The post ends with a satirical twist on Cowen's conclusion about society being designed to 'take the plunge' with new technologies.
Mar 20, 2023
acx
8 min · 1,098 words · 530 comments · 509 likes · podcast (8 min)
Scott Alexander narrates a haunting pre-dawn walk through San Francisco, mixing observations with apocalyptic musings before the spell is broken by sunrise.
Scott Alexander describes a surreal early morning experience in San Francisco, blending observations of the city with morbid thoughts and literary references. He reflects on the city's role as a hub of technological progress and potential existential risk, comparing it to pivotal moments in Earth's history. The post oscillates between eerie, apocalyptic imagery and more grounded observations, ultimately acknowledging the normalcy of the city as daylight breaks.
Mar 14, 2023
acx
31 min · 4,264 words · 617 comments · 206 likes · podcast (24 min)
Scott Alexander examines optimistic and pessimistic scenarios for AI risk, weighing the potential for intermediate AIs to help solve alignment against the threat of deceptive 'sleeper agent' AIs.
Scott Alexander discusses the varying estimates of AI extinction risk among experts and presents his own perspective, balancing optimistic and pessimistic scenarios. He argues that intermediate AIs could help solve alignment problems before a world-killing AI emerges, but also considers the possibility of 'sleeper agent' AIs that pretend to be aligned while waiting for an opportunity to act against human interests. The post explores key assumptions that differentiate optimistic and pessimistic views on AI risk, including AI coherence, cooperation, alignment solvability, superweapon feasibility, and the nature of AI progress.
Mar 07, 2023
acx
11 min · 1,425 words · 600 comments · 178 likes · podcast (9 min)
Scott Alexander uses Kelly betting to argue why AI development, unlike other technologies, poses too great a risk to civilization to pursue aggressively.
Scott Alexander responds to Scott Aaronson's argument for being less hostile to AI development. While agreeing with Aaronson's points about nuclear power and other technologies where excessive caution caused harm, Alexander argues that AI is different. He uses the concept of Kelly betting from finance to explain why: even with good bets, you shouldn't risk everything at once. Alexander contends that while technology is generally a great bet, AI development risks 'betting everything' on civilization's future. He concludes that while some AI development is necessary, we must treat existential risks differently than other technological risks.
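The Kelly criterion mentioned above has a simple closed form; this minimal sketch uses hypothetical numbers (not taken from the post) to show why it never recommends staking the entire bankroll, even on an excellent bet:

```python
# Kelly criterion sketch (illustrative numbers, not from the post): for a bet
# paying b-to-1 that you win with probability p, the stake that maximizes
# long-run growth is f* = p - (1 - p) / b.
def kelly_fraction(p: float, b: float) -> float:
    """Optimal fraction of bankroll to stake on a b-to-1 bet won with probability p."""
    return p - (1 - p) / b

print(kelly_fraction(0.9, 5.0))    # ~0.88  -> stake about 88%, never 100%
print(kelly_fraction(0.99, 10.0))  # ~0.989 -> still strictly below 1
# A single all-in loss ends the game no matter how favorable the odds were,
# which is the analogy Scott draws to betting civilization's future on AI.
```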
Aug 23, 2022
acx
55 min · 7,637 words · 636 comments · 184 likes · podcast (54 min)
Scott Alexander reviews Will MacAskill's 'What We Owe The Future', a book arguing for longtermism and considering our moral obligations to future generations.
Scott Alexander reviews Will MacAskill's book 'What We Owe The Future', which argues for longtermism, the idea that we should prioritize helping future generations. The review covers the book's key arguments about moral obligations to future people, ways to affect the long-term future, and population ethics dilemmas. Scott expresses some skepticism about aspects of longtermism and population ethics, but acknowledges the book's thought-provoking ideas and practical suggestions for having positive long-term impact.
Jul 13, 2022
acx
42 min · 5,840 words · 449 comments · 246 likes · podcast (38 min)
Scott reviews a biography of John von Neumann, exploring the mathematician's life, genius, and views on existential risk from technology.
This post reviews 'The Man From The Future', a biography of John von Neumann by Ananyo Bhattacharya. It covers von Neumann's early life and education in Hungary, his extraordinary intellectual abilities, his work across various scientific fields, and his views on existential risks from technology. The review explores theories about why so many geniuses emerged from Hungary in the early 20th century, details von Neumann's personality and social skills, and discusses his controversial views on nuclear war. It ends with von Neumann's thoughts on how humanity might survive the dangers of rapid technological progress.
Jul 30, 2021
acx
10 min · 1,318 words · 243 comments · 38 likes · podcast (12 min)
Scott Alexander discusses a new expert survey on long-term AI risks, highlighting the diverse scenarios considered and the lack of consensus on specific threats.
Scott Alexander discusses a new expert survey on long-term AI risks, conducted by Carlier, Clarke, and Schuett. Unlike previous surveys, this one focuses on people already working in AI safety and governance. The survey found a median ~10% chance of AI-related catastrophe, with individual estimates ranging from 0.1% to 100%. The survey explored six different scenarios for how AI could go wrong, including superintelligence, influence-seeking behavior, Goodharting, AI-related war, misuse by bad actors, and other possibilities. Surprisingly, all scenarios were rated as roughly equally likely, with 'other' being slightly higher. Scott notes three key takeaways: the relatively low probability assigned to unaligned AI causing extinction, the diversification of concerns beyond just superintelligence, and the lack of a unified picture of what might go wrong among experts in the field.
Apr 01, 2020
ssc
39 min · 5,435 words · 511 comments · podcast (35 min)
Scott Alexander reviews Toby Ord's 'The Precipice', a book about existential risks to humanity, noting Ord's careful analysis and surprisingly low risk estimates while emphasizing the importance of addressing these risks.
This book review discusses Toby Ord's 'The Precipice', which examines existential risks to humanity. The review outlines Ord's arguments for taking these risks seriously, his analysis of specific risks like nuclear war and AI, and his recommendations for addressing them. The reviewer notes Ord's careful statistical reasoning and surprisingly low risk estimates for many scenarios, while still emphasizing the overall importance of mitigating existential risks. The review concludes by reflecting on Ord's perspective and the appropriate response to even seemingly small risks of human extinction.
Dec 17, 2019
ssc
29 min · 4,000 words · 195 comments · podcast (30 min)
The post compares space colonization and terrestrial lifeboats as X-risk mitigation strategies, concluding that space colonies may offer better long-term survival guarantees despite higher costs.
This post discusses the merits of colonizing space versus creating terrestrial lifeboats as strategies to mitigate existential risks (X-risks) to humanity. The authors, Nick D and Rob S, compare the costs, feasibility, and effectiveness of off-world colonies and Earth-based closed systems. They explore the challenges and benefits of each approach, including isolation from global catastrophes, technological requirements, and potential for research and economic opportunities. The collaboration concludes that while terrestrial lifeboats are more cost-effective and easier to implement, space colonies might offer better long-term guarantees for human survival due to the difficulty of abandoning them.
Aug 31, 2016
ssc
9 min · 1,187 words · 402 comments
Scott Alexander critiques the argument that terrorism is less concerning than mundane accidents, showing how excluding 'outlier' events can dangerously skew risk assessments for threats like terrorism and pandemics.
Scott Alexander critiques the common argument that terrorism shouldn't be a major concern because it kills fewer people than mundane accidents like falling furniture. He points out that this reasoning is flawed because it often arbitrarily excludes major events like 9/11 as 'outliers'. Using examples like earthquakes in Haiti and the 1918 flu pandemic, he demonstrates how excluding extreme events can drastically skew risk assessments. He argues that for some threats, including terrorism, pandemics, and existential risks, these 'outlier' events are actually the most important consideration. The post concludes by expressing concern that this flawed reasoning might be applied after a future catastrophic terrorist attack, undermining the importance of prevention efforts.
Aug 12, 2015
ssc
18 min · 2,492 words · 427 comments
Scott Alexander critiques Dylan Matthews' argument against prioritizing existential risk reduction, arguing that Matthews misuses probabilities and that his logic could also undermine other effective altruist causes.
Scott Alexander critiques Dylan Matthews' argument against prioritizing existential risk reduction in effective altruism. Matthews claims that the probabilities used in x-risk arguments are made up and could be as low as 10^-66. Scott argues that such extremely low probabilities are unrealistic and that Matthews is misusing numbers. He explains that even with rough estimates, the case for prioritizing x-risk remains strong. Scott also points out that similar arguments could be used against other causes Matthews supports, like animal welfare. He concludes by advocating for a big tent approach in effective altruism that respects different cause prioritizations, including x-risk, while acknowledging that x-risk might not be the best public face for the movement.
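The arithmetic the two writers dispute is ordinary expected value; the figures below are illustrative only (neither Matthews' nor Scott's numbers) and simply show how everything turns on which probability you find plausible:

```python
# Illustrative expected-value arithmetic (hypothetical figures, not taken from
# either writer): the dispute reduces to which probability-of-making-a-difference
# is realistic.
future_lives_at_stake = 1e16   # one commonly cited order of magnitude for future people

for prob_of_averting_extinction in (1e-66, 1e-9, 1e-3):
    expected_lives_saved = prob_of_averting_extinction * future_lives_at_stake
    print(f"p = {prob_of_averting_extinction:.0e}: expected lives saved ~ {expected_lives_saved:.0e}")
# At 10^-66 the expected value is negligible; at rough but arguably defensible
# estimates, it dwarfs most other causes.
```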
Jul 30, 2014
ssc
107 min · 14,894 words · 736 comments · podcast (107 min)
Scott Alexander analyzes Moloch as a metaphor for destructive societal coordination problems, using various examples to show how competing incentives can lead to negative outcomes.
Scott Alexander explores the concept of Moloch as a metaphor for destructive coordination problems in society, drawing on Allen Ginsberg's poem and various examples to illustrate how competing incentives can lead to negative outcomes for all.
May 28, 2014
ssc
14 min · 1,909 words · 210 comments
Scott Alexander argues against several popular Great Filter explanations, emphasizing that the true Filter must be more consistent and thorough than common x-risks or alien interventions.
Scott Alexander critiques several popular explanations for the Great Filter theory, which attempts to explain the Fermi Paradox. He argues that common x-risks like global warming, nuclear war, or unfriendly AI are unlikely to be the Great Filter, as they wouldn't consistently prevent 999,999,999 out of a billion civilizations from becoming spacefaring. He also dismisses the ideas of transcendence or alien exterminators as the Filter. Scott emphasizes that the Great Filter must be extremely thorough and consistent to explain the lack of observable alien civilizations.
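A quick numerical illustration (hypothetical survival rates, not figures from the post) of why ordinary x-risks make weak Great Filter candidates:

```python
# Filter-strength arithmetic (illustrative rates): a filter that lets even one
# civilization in a thousand through still leaves a million spacefaring
# civilizations out of a billion starters.
starting_civilizations = 1_000_000_000

for survival_rate in (0.1, 0.001, 1e-9):
    survivors = starting_civilizations * survival_rate
    print(f"survival rate {survival_rate:g}: ~{survivors:,.0f} civilizations left")
# Only something near the last rate (about one in a billion) is consistent with
# observing no spacefaring civilizations at all -- far stricter than plausible
# odds that nuclear war or unfriendly AI destroys *every* civilization facing them.
```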