How do you avoid getting lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

Sep 10, 2024
acx
20 min 2,556 words Comments pending
Scott Alexander refutes Freddie deBoer's argument against expecting major events in our lifetimes, presenting a more nuanced approach to estimating such probabilities.
Scott Alexander responds to Freddie deBoer's argument against expecting significant events like a singularity or apocalypse in our lifetimes. Alexander critiques deBoer's 'temporal Copernican principle,' arguing that it fails basic sanity checks and misunderstands anthropic reasoning. He presents a more nuanced approach to estimating probabilities of major events, considering factors like technological progress and population distribution. Alexander concludes that while prior probabilities for significant events in one's lifetime are not negligible, they should be updated based on observations and common sense.
Feb 14, 2023
acx
27 min 3,481 words 819 comments 387 likes podcast
Scott Alexander defends his thorough analysis of ivermectin studies, arguing that dismissing controversial topics without addressing evidence can inadvertently promote conspiracy theories.
Scott Alexander responds to criticism from Chris Kavanagh about his lengthy analysis of ivermectin studies. He argues that dismissing controversial topics without addressing evidence can push people toward conspiracy theories. Scott shares his personal experience with Atlantis conspiracy theories as a teenager, emphasizing the importance of providing rational explanations rather than mockery. He critiques Kavanagh's apparent stance against examining evidence, likening it to religious fideism. Scott defends the value of practicing critical thinking and evidence evaluation, even on settled issues, to build skills for harder cases. He argues that conspiracy theorists use similar reasoning processes to everyone else, just with more biases, and that understanding these processes is crucial for effective communication and prevention of misinformation.
Feb 01, 2023
acx
104 min 13,427 words 315 comments 105 likes podcast
Scott Alexander responds to critiques of his 2021 ivermectin analysis, acknowledging some errors but maintaining his conclusion that ivermectin likely doesn't work for COVID-19.
Scott Alexander responds to Alexandros Marinos' critique of his 2021 post on ivermectin studies, addressing points about individual studies, meta-analysis methods, publication bias, and new evidence since 2021. He acknowledges some mistakes in his original analysis but maintains his overall conclusion that ivermectin is likely ineffective for COVID-19.
Nov 11, 2022
acx
45 min 5,820 words 733 comments 171 likes podcast
Scott Alexander argues for generally believing reports of unusual mental experiences, countering skepticism about unfalsifiable internal states.
Scott Alexander responds to Resident Contrarian's skepticism about unfalsifiable internal states like jhana meditation experiences. He argues that we should generally believe people's reports of unusual mental experiences, based on examples like visual imagination, synaesthesia, and phantom limb pain. Scott critiques RC's arguments about 'spoonies' and dissociative identity disorder, providing a more nuanced view of these conditions. He emphasizes the importance of considering evidence and priors rather than relying solely on intuition about whether people are lying.
Jun 10, 2022
acx
35 min 4,463 words 497 comments 107 likes podcast
Scott Alexander argues against Gary Marcus's critique of AI scaling, discussing the potential for future AI capabilities and the nature of human intelligence.
Scott Alexander responds to Gary Marcus's critique of AI scaling, arguing that current AI limitations don't necessarily prove statistical AI is a dead end. He discusses the scaling hypothesis, compares AI development to human cognitive development, and suggests that 'world-modeling' may emerge from pattern-matching abilities rather than being a distinct, hard-coded function. Alexander also considers the potential capabilities of future AI systems, even if they don't achieve human-like general intelligence.
May 16, 2022
acx
20 min 2,513 words 359 comments 94 likes podcast
Scott critiques evolutionary explanations for suitor-parent disagreements in mate choice, proposing that suitors use innate instincts while parents rely on reasoning, leading to different preferences.
Scott Alexander critiques Dynomight's theories on why suitors and parents disagree about mate choice. He argues that evolutionary psychology explanations are insufficient, proposing instead that suitors rely on innate, finely-tuned instincts for mate selection, while parents use less-evolved human reasoning. This difference in decision-making processes leads to systematically different preferences. Scott also explores the complexity of human drives related to reproduction, questioning whether they exist at different cognitive 'levels' (reptilian, mammalian, human) and how they interact.
Apr 22, 2022
acx
30 min 3,879 words 160 comments 84 likes podcast
Scott Alexander critiques Ben Hoffman's arguments about Vitamin D dosing, maintaining that it is primarily a bone-related chemical with limited evidence for other benefits.
Scott Alexander responds to Ben Hoffman's critique of his views on Vitamin D dosing. He argues that ancestral populations likely received much less Vitamin D from sunlight than Hoffman suggests, and that the doses used in most studies are appropriate. Scott reviews the literature on Vitamin D dosing, discusses various recommendations and debates within the medical community, and explains why he remains skeptical of claims about Vitamin D's non-skeletal benefits, including for COVID-19 treatment.
Apr 04, 2022
acx
66 min 8,451 words 611 comments 83 likes podcast
Scott Alexander summarizes a debate between Yudkowsky and Christiano on whether AI progress will be gradual or sudden, exploring their key arguments and implications.
This post summarizes a debate between Eliezer Yudkowsky and Paul Christiano on AI takeoff speeds. Christiano argues for a gradual takeoff where AI capabilities increase smoothly, while Yudkowsky predicts a sudden, discontinuous jump to superintelligence. The post explores their key arguments, including historical analogies, the nature of intelligence and recursive self-improvement, and how to measure AI progress. It concludes that while forecasters slightly favor Christiano's view, both scenarios present significant risks that are worth preparing for.
Mar 22, 2022
acx
19 min 2,418 words 623 comments 149 likes podcast
Scott Alexander argues against Erik Hoel's claim that the decline of 'aristocratic tutoring' explains the perceived lack of modern geniuses, offering alternative explanations and counterexamples.
Scott Alexander critiques Erik Hoel's essay on the decline of geniuses, which attributes this decline to the loss of 'aristocratic tutoring'. Scott argues that this explanation is insufficient, providing counterexamples of historical geniuses who weren't aristocratically tutored. He also points out that fields like music, where such tutoring is still common, still experience a perceived decline in genius. Scott proposes alternative explanations for the apparent lack of modern geniuses, including the increasing difficulty of finding new ideas, the distribution of progress across more researchers, and changing social norms around celebrating individual brilliance. He suggests that newer, smaller fields like AI and AI alignment still produce recognizable geniuses, supporting his view that the apparent decline is more about the maturity and size of fields than about educational methods.
Jan 19, 2022
acx
39 min 5,013 words 805 comments 103 likes podcast
Scott Alexander reviews a dialogue between Yudkowsky and Ngo on AI alignment difficulty, exploring the challenges of creating safe superintelligent AI.
This post reviews a dialogue between Eliezer Yudkowsky and Richard Ngo on AI alignment difficulty. Both accept that superintelligent AI is coming soon and could potentially destroy the world if not properly aligned. They discuss the feasibility of creating 'tool AIs' that can perform specific tasks without becoming dangerous agents. Yudkowsky argues that even seemingly safe AI designs could easily become dangerous agents, while Ngo is more optimistic about potential safeguards. The post also touches on how biological brains make decisions and on Scott's thoughts about the conceptual nature of the discussion.
Aug 12, 2021
acx
25 min 3,131 words 740 comments 99 likes podcast
Scott Alexander challenges Richard Hanania's explanation for liberal dominance in institutions, attributing it instead to shifting coalition systems described by Thomas Piketty.
Scott Alexander responds to Richard Hanania's article asking why everything is liberal despite roughly equal numbers of conservative and liberal voters. Alexander argues that the reason is not, as Hanania suggests, that liberals care more about politics, but rather the shifting coalition systems described by Thomas Piketty. Piketty's research shows a change from a 1950s system of elite vs. common parties to a current system where the left captures highly educated voters while the right captures less educated and some wealthy voters. This shift explains why institutions dominated by highly educated people lean liberal. Alexander discusses the implications of this shift, including potential instability in the system and the risk of institutional monocultures. He suggests potential solutions like decreasing the importance of college degrees in society and solving racism to shake up political coalitions.
Aug 08, 2021
acx
28 min 3,628 words 276 comments 114 likes podcast
Scott Alexander defends his criticism of the FDA's approval process in the infant fish oil case, arguing that systemic issues cause harmful delays even when the FDA follows its mandate.
Scott Alexander responds to Kevin Drum's criticism of his interpretation of the infant fish oil story. He maintains that his account was substantially correct, despite some minor errors. Scott argues that the FDA's approval process, while following its mandate, causes unnecessary delays in life-saving treatments. He uses analogies to illustrate how the FDA's structure can be problematic even when individual employees perform well. Scott emphasizes that his criticism is not about the FDA failing its mandate, but about the design of the system itself causing delays in implementing known beneficial treatments. He concludes by addressing Drum's skepticism of FDA critics, arguing that anger towards the FDA often comes from personal experiences with its shortcomings.
Jul 27, 2021
acx
18 min 2,322 words 441 comments 126 likes podcast
Scott Alexander critiques Daron Acemoglu's Washington Post article on AI risks, highlighting flawed logic and unsupported claims about AI's current impacts.
Scott Alexander critiques an article by Daron Acemoglu in the Washington Post about AI risks. He identifies the main flaw as Acemoglu's argument that because AI is dangerous now, it can't be dangerous in the future. Scott argues this logic is flawed and that present and future AI risks are not mutually exclusive. He also criticizes Acemoglu's claims about AI's current negative impacts, particularly on employment, as not well-supported by evidence. Scott discusses the challenges of evaluating new technologies' impacts and argues that superintelligent AI poses unique risks different from narrow AI. He concludes by criticizing the tendency of respected figures to dismiss AI risk concerns without proper engagement with the arguments.
Jun 14, 2021
acx
30 min 3,898 words 702 comments 197 likes podcast
Scott Alexander argues that Jewish overachievement is real and deserves continued study, countering Noah Smith's attempt to downplay its significance.
Scott Alexander responds to Noah Smith's article questioning whether Jews are really disproportionately successful. Scott argues that Jewish success is real and not fully explained by selective immigration or other factors Noah proposed. He examines historical evidence on Jewish immigration, compares Jewish achievement to urbanization rates, and discusses data on Jewish success in various fields. Scott concludes that Jewish overachievement remains an interesting and important phenomenon to study, potentially offering insights into genetics or cultural factors that could be broadly beneficial if understood.
Jan 15, 2020
ssc
33 min 4,232 words 458 comments podcast
Scott Alexander critiques Bryan Caplan's constraints vs preferences model of mental illness, proposing instead a goals vs urges framework that better explains both mental and physical health issues.
Scott Alexander responds to Bryan Caplan's critique of psychiatry, focusing on Caplan's distinction between constraints and preferences in mental illness. Scott argues that this model is flawed and doesn't accurately represent mental or even many physical illnesses. He proposes a more nuanced model based on goals (endorsed preferences) and urges (unendorsed preferences), using examples to show how this better explains behavior in both mental and physical health contexts. Scott concludes that this model allows for a more libertarian approach, supporting individuals in achieving their goals, whether through addressing constraints or managing urges.
Jun 19, 2018
ssc
10 min 1,273 words 412 comments podcast
Scott Alexander argues that public outrage over specific misdeeds is not arbitrary, but a strategic way to enforce important social norms with limited resources.
Scott Alexander responds to Bryan Caplan's article about the arbitrariness of public outrage, proposing a different theory. He argues that people get upset over violations of established norms because it's an efficient way to use limited enforcement resources. Scott uses examples of police prioritizing certain crimes and the international response to chemical weapons to illustrate his point. He extends this reasoning to explain public outrage over sexual harassment and suggests that enforcing taboos against clearly defined bad behaviors can be more effective than trying to prevent all forms of misconduct. The post concludes by applying this logic to the case of China's treatment of Uighurs, arguing that strongly enforcing the norm against putting minorities in concentration camps can have broader preventative effects.
Nov 21, 2017
ssc
40 min 5,158 words 611 comments podcast
Scott Alexander argues against Nathan Robinson's proposal for public cafeterias, instead favoring a system of food vouchers with taxes and subsidies to promote healthy eating.
Scott Alexander responds to Nathan Robinson's proposal for a public food option, arguing that the existing system of vouchers plus taxes and subsidies is superior. He points out that a public cafeteria system would likely become stigmatized and low-quality, while vouchers allow poor people to access the same high-quality food as everyone else. Alexander then critiques the current implementation of agricultural subsidies and dietary guidelines, showing how government mismanagement has promoted unhealthy food. He argues that both capitalism and government are 'misaligned systems' that can produce bad outcomes, and that the solution is to pit multiple systems against each other with checks and balances rather than relying solely on government control.
Aug 07, 2017
ssc
54 min 6,903 words 3 comments podcast
Scott Alexander critiques Adam Grant's article on gender differences in tech, arguing Grant misrepresents evidence and ignores key factors like innate interest differences between men and women.
Scott Alexander critiques Adam Grant's article on gender differences, arguing that Grant misrepresents scientific evidence and ignores important factors like interest differences between men and women. Scott presents alternative explanations for gender imbalances in tech and other fields, emphasizing innate differences in interests rather than discrimination. He expresses concern about the hostile climate developing in tech around these issues.
Jan 30, 2017
ssc
2 min 173 words 21 comments podcast
Scott Alexander shares links to a debate between Gary Taubes and Stephan Guyenet about the health effects of sugar, praising the high-level discussion.
This post discusses a debate about sugar's health effects, centered on Gary Taubes' work. Scott Alexander links to Stephan Guyenet's negative review of Taubes' book, then shares Taubes' counterargument. The debate involves multiple essays: Taubes' initial case against sugar on Cato Unbound, responses from Terence Kealey, Yoni Freedhoff, and Guyenet, and finally Taubes' rebuttal. Scott praises all participants for engaging in a high-level debate that has helped clarify his thinking on the topic.
Dec 30, 2016
ssc
4 min 517 words 338 comments podcast
Scott Alexander criticizes a New York Times article for misrepresenting economists' views on education vouchers, showing the data actually indicates more support than opposition among economists.
Scott Alexander critiques a New York Times article about economists' views on education vouchers. The article claims economists generally don't support free market approaches to education, but Scott points out that the survey data cited actually shows more economists support vouchers than oppose them. He argues this misrepresentation is poor journalistic practice and hopes for a correction.
Dec 02, 2016
ssc
49 min 6,286 words 608 comments podcast
Scott Alexander critiques arguments against school vouchers, discussing potential efficiency gains and drawbacks of privatization in education, while proposing experimental approaches to school reform.
Scott Alexander critiques Nathan Robinson's arguments against school vouchers, discussing the potential efficiency gains and drawbacks of privatization in education. He compares education to other sectors like healthcare and grocery stores, analyzes the rising costs in public education, and proposes experimental approaches to school reform, including a system of small, home-based schools.
Oct 27, 2015
ssc
23 min 2,897 words 713 comments podcast
A dialogue critiques Michael Huemer's view on objective moral truths, arguing that moral changes are driven by wealth and societal conditions rather than convergence on objective truth.
This post presents a dialogue between Achitophel and Berenice discussing Michael Huemer's view on objective moral truths. Berenice argues against Huemer's perspective, suggesting that changes in moral values are primarily driven by increasing wealth and changing societal conditions rather than a convergence on objective moral truth. She provides examples such as changes in fashion, the impact of disease prevalence on moral foundations, and the influence of economic factors on moral decisions. Achitophel initially defends Huemer's view but gradually concedes some points to Berenice's arguments. The dialogue concludes with a discussion on whether certain moral foundations, particularly Care/Harm, might be more fundamental than others.
Oct 21, 2015
ssc
27 min 3,468 words 568 comments podcast
Scott critiques Simler's theory of prestige, finding it insufficient for human behavior, and proposes five alternative explanations for the phenomenon.
Scott Alexander critiques Kevin Simler's theory of prestige as presented in 'Social Status: Down The Rabbit Hole'. Simler separates status into dominance and prestige, with prestige explained through the behavior of Arabian babblers. Scott finds this explanation insufficient for human prestige, particularly for admiration of celebrities or people we don't interact with directly. He proposes five alternative explanations for prestige: group signaling, coattail riding, prestige by association, tit for tat, and virtuous cycles. Scott concludes that prestige might not be a single phenomenon and that separating dominance from prestige is a good starting point for understanding status.
Oct 07, 2015
ssc
38 min 4,853 words 761 comments podcast
Scott Alexander critiques Bryan Caplan's argument that psychiatric diseases are unusual preferences rather than real illnesses, providing counterarguments and evidence to show this view is untenable.
Scott Alexander critiques Bryan Caplan's 2006 paper arguing that psychiatric diseases are better understood as unusual preferences rather than true illnesses. Scott challenges Caplan's distinction between preferences and budgetary constraints, arguing it breaks down for complex human experiences. He provides counterexamples showing how mental illnesses can resemble physical constraints, discusses how most psychiatric patients seek help voluntarily, and examines issues with Caplan's explanations of alcoholism and schizophrenia. Scott concludes that viewing psychiatric illnesses as simply different preferences is not tenable given the evidence.
Aug 09, 2015
ssc
27 min 3,495 words 424 comments podcast
Scott explores the nature of scientific contrarianism, discussing how ideas spread through the scientific community and the challenges faced by both crackpots and legitimate contrarians.
This post discusses the concept of contrarians and crackpots in science, exploring how ideas move through different levels of the scientific community. Scott examines cases like Gary Taubes and the serotonin theory of depression to illustrate how scientific consensus can differ at various levels. He proposes a pyramid model of scientific knowledge dissemination and discusses how contrarians might be skipping levels in this pyramid. The post then contrasts virtuous contrarians with crackpots, noting that the former often face indifference rather than opposition. Scott concludes by discussing paradigm shifts in science and how even correct contrarians often lose credit for their ideas.