How do you explore Scott Alexander's work and his 1500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

20 posts found
Oct 03, 2022
acx
39 min · 5,447 words · 431 comments · 67 likes · podcast (42 min)
Scott Alexander explains and analyzes the debate between MIRI and CHAI on AI alignment strategies, focusing on the challenges and potential flaws in CHAI's 'assistance games' approach.
This post discusses the debate between MIRI and CHAI regarding AI alignment strategies, focusing on CHAI's 'assistance games' approach and MIRI's critique of it. The author explains the concepts of sovereign and corrigible AI, inverse reinforcement learning, and the challenges in implementing these ideas in modern AI systems. The post concludes with a brief exchange between Eliezer Yudkowsky (MIRI) and Stuart Russell (CHAI), highlighting their differing perspectives on the feasibility and potential pitfalls of the assistance games approach.
Aug 04, 2022
acx
15 min · 1,986 words · 318 comments · 89 likes · podcast (16 min)
Scott Alexander examines the use of absurdity arguments, reflecting on his critique of Neom and offering strategies to balance absurdity heuristics with careful reasoning.
Scott Alexander reflects on his previous post mocking the Neom project, considering whether his use of the absurdity heuristic was justified. He explores the challenges of relying on absurdity arguments, acknowledging that everything ultimately bottoms out in such arguments. The post discusses when it's appropriate to use absurdity heuristics in communication and personal reasoning, and offers strategies for avoiding absurdity bias. These include calibration training, social epistemology, occasional deep dives into fact-checking, and examining why beliefs come to our attention. Scott concludes that while there's no perfect solution, these approaches can help balance the use of absurdity arguments with more rigorous thinking.
Apr 04, 2022
acx
61 min · 8,451 words · 611 comments · 83 likes · podcast (63 min)
Scott Alexander summarizes a debate between Yudkowsky and Christiano on whether AI progress will be gradual or sudden, exploring their key arguments and implications.
This post summarizes a debate between Eliezer Yudkowsky and Paul Christiano on AI takeoff speeds. Christiano argues for a gradual takeoff where AI capabilities increase smoothly, while Yudkowsky predicts a sudden, discontinuous jump to superintelligence. The post explores their key arguments, including historical analogies, the nature of intelligence and recursive self-improvement, and how to measure AI progress. It concludes that while forecasters slightly favor Christiano's view, both scenarios present significant risks that are worth preparing for.
Mar 04, 2022
acx
26 min · 3,506 words · 411 comments · 153 likes · podcast (24 min)
Scott Alexander examines various interpretations of rationality, concluding it might be best understood as 'the study of study' - a meta-level examination of truth-seeking methods.
Scott Alexander explores different interpretations of rationality and the debate between rationalists and anti-rationalists. He examines rationality as full computation vs. heuristics, explicit computation vs. intuition, and Yudkowsky's definition of 'systematized winning'. The post concludes by suggesting rationality might be best understood as 'the study of study' - a meta-level examination of truth-seeking methods. This perspective explains why rationality is often associated with explicit calculation, despite the importance of intuition and heuristics in practical decision-making. The post argues that while intuitive methods may often be effective, the formal study of rationality allows for replication, scaling, and innovation in ways that intuition alone cannot.
Feb 23, 2022
acx
80 min · 11,062 words · 385 comments · 121 likes · podcast (71 min)
Scott Alexander reviews competing methodologies for predicting AI timelines, focusing on Ajeya Cotra's biological anchors approach and Eliezer Yudkowsky's critique.
Scott Alexander reviews Ajeya Cotra's report on AI timelines for Open Philanthropy, which uses biological anchors to estimate when transformative AI might arrive, and Eliezer Yudkowsky's critique of this methodology. The post explains Cotra's approach, Yudkowsky's objections, and various responses, ultimately concluding that while the report may not significantly change existing beliefs, the debate highlights important considerations in AI forecasting.
Jan 19, 2022
acx
36 min · 5,013 words · 805 comments · 103 likes · podcast (37 min)
Scott Alexander reviews a dialogue between Yudkowsky and Ngo on AI alignment difficulty, exploring the challenges of creating safe superintelligent AI.
This post reviews a dialogue between Eliezer Yudkowsky and Richard Ngo on AI alignment difficulty. Both accept that superintelligent AI is coming soon and could potentially destroy the world if not properly aligned. They discuss the feasibility of creating 'tool AIs' that can perform specific tasks without becoming dangerous agents. Yudkowsky argues that even seemingly safe AI designs could easily become dangerous agents, while Ngo is more optimistic about potential safeguards. The post also touches on how biological brains make decisions, and the author's thoughts on the conceptual nature of the discussion.
Aug 26, 2021
acx
45 min · 6,219 words · 575 comments · 78 likes · podcast (37 min)
Scott Alexander discusses and responds to comments on his article about the effects of missing school, exploring various perspectives and reflecting on education's value and impact.
This post discusses the comments on Scott Alexander's previous article about the effects of missing school on children's education. It covers various perspectives, including personal anecdotes of people who missed school and succeeded, concerns about the impact on disadvantaged children, debates about the value of schooling beyond test scores, and Scott's reflections on the reactions to his original post. The author also shares his thoughts on the nature of education, forced activities for children, and the ethical implications of arguing for weaker positions while holding stronger views.
Jul 05, 2018
ssc
8 min · 986 words · 680 comments · podcast (8 min)
Scott Alexander discusses how his blog contributes to developing rationality skills through analysis of complex issues and community discussion, despite not focusing directly on core rationality techniques.
Scott Alexander reflects on the role of his blog in the rationalist community's development of rationality skills. He compares rationality to a martial art or craft, requiring both theory and practice. While acknowledging that his blog often focuses on controversial topics rather than core rationality techniques, he argues that analyzing complex, contentious issues serves as valuable practice for honing rationality skills. He suggests that through repeated engagement with difficult problems, readers can develop intuitions and refine their ability to apply rationality techniques. Scott emphasizes the importance of community discussion in this process, highlighting how reader comments contribute to his own learning and updating of beliefs.
Nov 30, 2017
ssc
61 min · 8,532 words · 479 comments · podcast (60 min)
Scott Alexander reviews 'Inadequate Equilibria' by Eliezer Yudkowsky, a book exploring systemic failures, decision-making, and the limits of social consensus.
Scott Alexander reviews Eliezer Yudkowsky's book 'Inadequate Equilibria', which explores why bad situations persist despite pressure to improve them, when to trust social consensus vs. personal reasoning, and the pitfalls of overusing the Outside View in decision-making. The book offers a framework for analyzing systemic failures and inefficiencies, but Scott finds its treatment of the Outside View somewhat disappointing. He recommends reading the book despite its flaws, citing its thought-provoking nature and useful conceptual toolbox.
Aug 02, 2017
ssc
19 min · 2,607 words · 275 comments
Scott Alexander explores theories to reconcile contradictory views on AI progress rates, considering the implications for AI development timelines and intelligence scaling.
Scott Alexander discusses the apparent contradiction between Eliezer Yudkowsky's argument that AI progress will be rapid once it reaches human level, and Katja Grace's data showing gradual AI improvement across human-level tasks. He explores several theories to reconcile this, including mutational load, purpose-built hardware, varying sub-abilities, and the possibility that human intelligence variation is actually vast compared to other animals. The post ends by considering implications for AI development timelines and potential rapid scaling of intelligence.
Feb 08, 2016
ssc
14 min · 1,919 words · 572 comments
Scott Alexander shares a collection of mostly critical and often insulting testimonials about his blog Slate Star Codex, revealing the diverse and polarized reactions to his work.
Scott Alexander shares a collection of testimonials and feedback he has received about his blog Slate Star Codex over three years. The post presents a wide range of opinions, many of which are highly critical, insulting, or dismissive. The feedback touches on various aspects of Scott's writing, personality, and the blog's community. Some comments praise his intelligence while criticizing his verbosity or political stance. Others mock his writing style, accuse him of censorship, or make personal attacks. The testimonials reveal the diverse and often polarized reactions to Scott's work, ranging from admiration to outright hostility.
Aug 20, 2015
ssc
34 min · 4,625 words · 703 comments
Scott Alexander discusses the problem of overconfidence in probability estimates, arguing that extreme certainty is rarely justified, especially for complex future predictions.
Scott Alexander discusses the problem of overconfidence in probability estimates, particularly when people claim to be extremely certain about complex future events. He explains how experiments show that people are often vastly overconfident, even when they claim 99.9999% certainty. Scott argues that extreme confidence is rarely justified, especially for predictions about technological progress or societal changes. He suggests that overconfidence contributes to intolerance and close-mindedness, and that studying history can help reduce overconfidence by showing how often confident predictions have been wrong in the past.
Aug 04, 2015
ssc
78 min · 10,781 words · 679 comments
Scott Alexander defends Less Wrong and Eliezer Yudkowsky against accusations of being anti-scientific, arguing that developing rational thinking skills beyond traditional scientific methods is valuable and necessary.
Scott Alexander responds to Topher Hallquist's criticism of Less Wrong and Eliezer Yudkowsky as promoting 'anti-scientific rationality'. Scott argues that Hallquist's criticisms are often unfair or inaccurate, taking quotes out of context or misunderstanding Yudkowsky's positions. He defends the rationalist community's efforts to develop better thinking tools that go beyond traditional scientific methods, while still respecting science. Scott contends that developing an 'Art of Thinking Clearly' is valuable and necessary, especially for experts who have to make difficult judgments. He argues Less Wrong is not against science, but wants to strengthen and supplement it with additional rational thinking skills.
Jul 23, 2015
ssc
20 min · 2,739 words · 391 comments
Scott Alexander explores the possibility of a 'General Factor of Correctness' and its implications for rationality and decision-making across various fields.
Scott Alexander discusses the concept of a 'General Factor of Correctness', inspired by Eliezer Yudkowsky's essay on the 'Correct Contrarian Cluster'. He explores whether people who are correct about one controversial topic are more likely to be correct about others, beyond what we'd expect from chance. The post delves into the challenges of identifying such a factor, including separating it from expert consensus agreement, IQ, or education level. Scott examines studies on calibration and prediction accuracy, noting intriguing correlations between calibration skills and certain beliefs. He concludes by emphasizing the importance of this concept to the rationalist project, suggesting that if such a 'correctness skill' exists, cultivating it could be valuable for improving decision-making across various domains.
Nov 27, 2014
ssc
27 min · 3,775 words · 567 comments
Scott Alexander refutes a blog post criticizing rationalism, arguing it misunderstands the movement and its core values of empiricism, scholarship, and humility.
Scott Alexander critiques a blog post titled 'Why I Am Not A Rationalist' on Almost Diamonds, arguing that it fundamentally misunderstands both classical rationalism (Descartes) and modern rationalism (Yudkowsky). He points out that the blog post accuses rationalists of lacking empiricism, scholarship, and humility, when these are in fact core values of the rationalist movement. Scott provides numerous examples to demonstrate the rationalist community's commitment to these principles. He concludes by explaining why rationality skills are necessary in addition to empirical knowledge, especially when dealing with limited or conflicting information.
May 06, 2014
ssc
2 min · 213 words · 21 comments
Scott Alexander promotes a charity event, encouraging donations to MIRI or Sankara Eye Foundation to help them win valuable services.
Scott Alexander is promoting a charity event where the organization with the most unique donors of $10 or more by midnight wins valuable services. He encourages readers to donate to MIRI (Machine Intelligence Research Institute), which works on ensuring a positive singularity and friendly AI. MIRI is close to first place with only a few hours left. Scott also mentions the option to donate to the current leader, Sankara Eye Foundation, which provides eye care for poor Indian children.
Mar 01, 2014
ssc
20 min · 2,790 words · 137 comments
Scott Alexander discusses the concept of one-sided tradeoffs, using examples from college admissions to life hacks, and suggests ways to find opportunities for 'free' gains in various decisions.
Scott Alexander explores the concept of one-sided tradeoffs using college admissions as a starting point. He explains how most decisions involve tradeoffs between different qualities, but suggests ways to find opportunities for 'free' gains. These include insider trading (having unique knowledge), bias compensation (exploiting others' biases), and comparative advantage (specializing in a specific area). He applies this framework to policy debates, life hacks, and personal decisions, arguing that understanding these concepts can help identify opportunities where one can gain benefits without significant downsides. The post concludes with examples like considering nootropics if one isn't afraid of taking drugs, or buying houses on streets with rude names for a discount.
Dec 29, 2013
ssc
11 min · 1,520 words · 67 comments
Scott Alexander argues for a legitimate 'spirit of the First Amendment' that protects the marketplace of ideas, criticizing tactics that silence rather than address arguments.
This post discusses the concept of 'spirit of the First Amendment' and its implications for free speech. Scott Alexander disagrees with Popehat's criticism of this concept, arguing that there is a legitimate meaning to it. He explains that the spirit of the First Amendment is about protecting the marketplace of ideas, where arguments succeed based on evidence rather than the power of their proponents. Scott distinguishes between good responses to arguments (addressing ideas) and bad responses (silencing them), including methods like getting people fired, doxxing, and online harassment. He argues that these silencing tactics distribute power based on popularity and wealth rather than the validity of ideas. The post concludes by stating that bad arguments should be met with counterarguments, not with tactics that silence or harm the speaker.
Jun 14, 2013
ssc
9 min · 1,227 words · 49 comments · podcast (9 min)
Scott Alexander explores the ethical implications of publicly debating medical confidentiality breaches, introducing the concept of the 'Virtue of Silence' as a potential solution.
Scott Alexander discusses the ethical dilemma of a doctor considering breaking medical confidentiality to free an innocent prisoner, as debated in the New York Times. He argues that the act of publicly discussing this dilemma in a major newspaper is more damaging to medical confidentiality than the actual breach would be. Scott introduces the concept of the 'Virtue of Silence,' suggesting that sometimes the most ethical action is to refrain from discussing certain issues publicly. He explores the challenges of practicing silence, the value of not contributing to viral debates, and the potential pitfalls of trying to enforce silence. The post ends with a nuanced view on when silence might be appropriate and a personal commitment to maintaining patient confidentiality.
May 24, 2013
ssc
22 min · 2,997 words · 48 comments
Scott Alexander bids farewell to California's Bay Area, praising its culture and the rationalist community while offering heartfelt tributes to friends who influenced him.
Scott Alexander reflects on his time in California's Bay Area as he prepares to leave for a four-year residency in the Midwest. He expresses deep appreciation for the Bay Area's unique culture, particularly the rationalist community he was part of. Scott describes the community's ability to discuss complex topics openly, their approach to happiness and virtue, and their unique social dynamics. He then offers personal tributes to numerous friends and acquaintances who have impacted him, highlighting their individual qualities and contributions to his life and the community.