How do you explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

18 posts found
Mar 28, 2024
acx
132 min 18,357 words 905 comments 369 likes podcast (95 min)
Scott Alexander reviews a $100,000 debate on COVID-19 origins, where the zoonotic hypothesis unexpectedly won against the lab leak theory.
Scott Alexander reviews a debate on the origins of COVID-19 between Saar Wilf, who supports the lab leak hypothesis, and Peter Miller, who argues for zoonotic origin. The debate was part of a $100,000 challenge by Wilf's Rootclaim project. Miller won decisively, with both judges ruling in favor of zoonotic origin. Alexander analyzes the debate format, arguments, and aftermath, discussing issues with Bayesian reasoning, extreme probabilities, and the challenges of resolving complex scientific questions through debate.
Jan 16, 2024
acx
28 min 3,906 words 638 comments 282 likes podcast (21 min)
Scott Alexander argues against significantly updating beliefs based on single dramatic events, advocating for consistent policies based on pre-existing probability distributions.
Scott Alexander argues against dramatically updating one's beliefs based on single events, even if they are significant. He contends that a good Bayesian should have distributions for various events and only make small updates when they occur. The post covers several examples, including COVID-19 origin theories, 9/11, mass shootings, sexual harassment scandals, and crises in the effective altruism movement. Scott suggests that while dramatic events can be useful for coordination and activism, they shouldn't significantly alter our understanding of underlying probabilities. He advocates for predicting distributions beforehand and maintaining consistent policies rather than overreacting to individual incidents.
Dec 23, 2021
acx
2 min 206 words 83 comments 87 likes podcast (4 min)
Scott provides a real-world example of how the phrase 'no evidence' can be misused in science reporting, contrasting it with a prediction market's more nuanced response.
This post is an addendum to Scott's previous article about the misuse of the phrase 'no evidence' in science communication. He provides a recent example from the Financial Times, which claimed there was 'no evidence' that the Omicron variant of COVID-19 was less deadly than Delta, based on a single study. Scott contrasts this with the Metaculus prediction market's response to the same study, showing how the market briefly dipped but quickly recovered and increased its prediction that Omicron was indeed less lethal. He presents this as a clear illustration of the difference between classical (frequentist) and Bayesian approaches to evidence and probability.
Dec 17, 2021
acx
11 min 1,476 words 513 comments 336 likes podcast (14 min)
Scott Alexander criticizes the misleading use of 'no evidence' in science communication and suggests more nuanced alternatives.
The post critiques the use of the phrase 'no evidence' in science communication, arguing that it's misleading and erodes public trust. Scott Alexander shows how the phrase is used inconsistently to mean both 'plausible but not yet proven' and 'definitively false'. He explains that this stems from a misunderstanding of how real truth-seeking works, which should be Bayesian rather than based on a simplistic null hypothesis model. The post concludes by suggesting better ways for journalists to communicate scientific uncertainty, including being more specific about the state of evidence and engaging with the arguments of those who believe differently.
Mar 26, 2021
acx
15 min 1,991 words 421 comments 137 likes podcast (15 min)
Scott Alexander proposes a Bayesian theory of willpower as a process of weighing evidence from different mental processes to determine actions.
Scott Alexander proposes a new Bayesian theory of willpower, disagreeing with previous models like glucose depletion, opportunity cost minimization, and mental agent conflicts. He suggests willpower is a process of weighing evidence from different mental processes: a prior on motionlessness, reinforcement learning, and conscious calculations. The basal ganglia then resolve this evidence to determine actions. Scott explores how this model explains the effects of dopaminergic drugs on willpower and discusses implications for understanding mental illness and productivity.
Mar 10, 2021
acx
36 min 5,025 words 653 comments 302 likes podcast (30 min)
Scott Alexander explores the concept of 'trapped priors' as a fundamental problem in rationality, explaining how it leads to persistent biases and suggesting potential solutions.
Scott Alexander explores the concept of 'trapped priors' as a fundamental problem in rationality. He explains how the brain combines raw experience with context to produce perceptions, and how this process can lead to cognitive biases and phobias. The article discusses how trapped priors can make it difficult for people to update their beliefs, even in the face of contradictory evidence. Scott also examines how this concept applies to political biases and suggests potential ways to overcome trapped priors.
Feb 12, 2020
ssc
4 min 531 words 115 comments podcast (6 min)
Scott Alexander proposes that confirmation bias might be a misapplication of normal Bayesian reasoning rather than a separate cognitive phenomenon.
Scott Alexander discusses confirmation bias, suggesting it might not be a separate phenomenon from normal reasoning but rather a misapplication of Bayesian reasoning. He uses an example of believing a friend who reports seeing a coyote in Berkeley but disbelieving the same friend reporting a polar bear. Scott argues this is similar to how we process information that confirms or challenges our existing beliefs. He proposes that when faced with evidence contradicting strong priors, we should slightly adjust our beliefs while heavily discounting the new evidence. The post critiques an evolutionary psychology explanation of confirmation bias from a Fast Company article, suggesting instead that confirmation bias might be a result of normal reasoning processes gone awry rather than a distinct cognitive bias.
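The coyote/polar-bear example is just Bayes' rule applied with different priors. The sketch below uses made-up numbers (the priors and the friend's reliability are illustrative assumptions, not figures from the post) to show why the same testimony can settle one claim while barely moving the other:

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """P(claim | friend reports it), via Bayes' rule."""
    numerator = p_report_if_true * prior
    evidence = numerator + p_report_if_false * (1 - prior)
    return numerator / evidence

# Assume a fairly reliable friend: reports a sighting 90% of the time it
# happened, and 5% of the time it didn't (mistake, joke, misidentification).
coyote = posterior(prior=0.10, p_report_if_true=0.90, p_report_if_false=0.05)
polar_bear = posterior(prior=1e-6, p_report_if_true=0.90, p_report_if_false=0.05)

print(f"P(coyote | report)     = {coyote:.3f}")       # ~0.667
print(f"P(polar bear | report) = {polar_bear:.6f}")   # ~0.000018
```

With these assumed numbers, one report lifts the coyote claim to about two-thirds probability but leaves the polar bear claim below one in ten thousand, which is the asymmetry the post attributes to ordinary Bayesian reasoning rather than a distinct bias.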
Nov 06, 2019
ssc
26 min 3,505 words 438 comments podcast (27 min)
Scott Alexander argues that non-empirical reasoning, based on principles like simplicity and elegance, is a necessary and legitimate part of scientific practice, even for evaluating seemingly untestable theories.
Scott Alexander discusses the role of non-empirical arguments in science, challenging the view that untestable theories are inherently unscientific. He argues that even in cases where direct empirical testing is impossible, scientists use principles like simplicity and elegance (often formalized as Occam's Razor) to evaluate competing theories. Scott uses examples ranging from paleontology vs. creationism to multiverse theories in physics to demonstrate that this type of reasoning is both necessary and legitimate in scientific practice. He concludes that while there may be debates about the best way to formalize or apply these principles, it's crucial to recognize that some form of non-empirical reasoning is an inescapable part of the scientific process.
Oct 24, 2019
ssc
23 min 3,097 words 165 comments podcast (22 min)
Scott Alexander examines skeptical and supportive comments on claims of enlightenment, arguing that evidence for such states is comparable to other accepted mental phenomena.
This post discusses the comments on a previous article about Persistent Non-Symbolic Experience (PNSE) or 'enlightenment'. Scott Alexander addresses skepticism towards claims of enlightenment, comparing it to other mental states and discussing the evidence for its existence. He argues that the evidence for enlightenment-like states is as strong as for many other accepted mental phenomena. The post also explores different perspectives on enlightenment, including potential criticisms and alternative explanations, as well as personal accounts from individuals with meditation experience.
Scott reviews a paper proposing that psychedelics work by relaxing priors in the brain, potentially treating mental illness but also risking side effects.
This post reviews a paper by Friston and Carhart-Harris that uses predictive coding theory to explain the effects of psychedelic drugs. The authors argue that psychedelics 'relax' priors in the brain, allowing for new perspectives and potential therapeutic benefits. They suggest this mechanism could help treat most mental illnesses by allowing patients to break free from maladaptive priors. The post discusses the theory's implications, including potential downsides like hallucinogen persisting perception disorder (HPPD) and increased belief in pseudoscience. It also mentions connections to meditation and prior work by other researchers.
Mar 20, 2019
ssc
10 min 1,386 words 103 comments podcast (11 min)
Scott Alexander argues that Free Energy/Predictive Coding and Perceptual Control Theory are fundamentally the same, and proposes using PCT's more intuitive terminology to help understand FE/PC.
Scott Alexander compares two theories of cognition and behavior: Free Energy/Predictive Coding (FE/PC) and Perceptual Control Theory (PCT). He argues that while they've developed differently, their foundations are essentially the same. Scott suggests that understanding PCT, which he finds more intuitive, can help in grasping the more complex FE/PC. He provides a glossary of equivalent terms between the two theories and gives examples to illustrate how PCT's terminology often makes more intuitive sense. The post concludes by discussing why FE/PC is more widely used despite PCT's advantages in explaining certain phenomena, and suggests teaching both terminologies to aid understanding.
Jan 10, 2019
ssc
4 min 552 words 58 comments podcast (7 min)
Scott Alexander presents a 'Grand Unified Chart' showing how different domains of knowledge share a similar structure in interpreting the world, arguing this is due to basic brain algorithms and effective epistemology.
Scott Alexander draws parallels between different domains of knowledge, showing how they all share a similar structure in interpreting the world. He presents a 'Grand Unified Chart' that compares Philosophy of Science, Bayesian Probability, Psychology, Discourse, Society, and Neuroscience. Each domain is broken down into three components: pre-existing ideas, discrepancies, and actual experiences. Scott argues that this structure is universal because it's built into basic brain algorithms and is the most effective way to do epistemology. He emphasizes that the interaction between facts and theories is bidirectional, and that theory change is a complex process resistant to simple contradictions.
Sep 07, 2017
ssc
8 min 1,089 words 313 comments
Scott Alexander examines the conflict between predictive processing theory and evolutionary psychology claims about innate knowledge, questioning how genes could directly encode complex preferences.
Scott Alexander explores the tension between predictive processing (PP) theory and evolutionary psychology claims about innate knowledge. He argues that while PP can accommodate some genetic influences on cognition, it struggles to explain how genes could directly encode high-level concepts like 'attraction to large breasts.' The post questions how such specific preferences could be genetically programmed given the limited number of genes humans have. Scott acknowledges that instincts clearly exist in animals, but suggests that even seemingly innate traits like gender identity may involve some level of inference. He proposes a heuristic for evaluating evolutionary psychology claims, recommending skepticism towards ideas that genes can directly manipulate high-level concepts unless there's a compelling evolutionary reason.
Jan 11, 2017
ssc
8 min 1,067 words 350 comments
Scott Alexander warns against forming strong heuristics based on limited data, using examples from AI research, elections, and campaign strategies to illustrate the pitfalls of this approach.
Scott Alexander discusses the dangers of forming strong heuristics based on limited data points. He presents three examples: AI research progress, election predictions, and campaign strategies. In each case, he shows how people formed confident heuristics after observing patterns in just one or two instances, only to be surprised when these heuristics failed. The post argues against treating life events as moral parables and instead advocates for viewing them as individual data points that may not necessarily generalize. Scott uses a mix of statistical reasoning, historical examples, and cultural references to illustrate his points.
Sep 12, 2016
ssc
28 min 3,848 words 215 comments
The post explores how Bayesian processes in the brain might explain perception and various mental disorders, linking neurotransmitters to different aspects of Bayesian reasoning.
This post explores the application of Bayes' Theorem to neuroscience and psychiatry. It discusses how the brain might use Bayesian processes for perception and cognition, and how disruptions in these processes could explain various mental disorders. The author first explains Bayes' Theorem and its relevance to perception, then delves into a neuroscientific model that links neurotransmitters to different aspects of Bayesian processing. The post then applies this model to explain phenomena in schizophrenia, psychedelic experiences, and autism. The author concludes by pointing out some limitations and inconsistencies in the model, while still appreciating its potential as a high-level framework for understanding brain function and mental disorders.
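The perceptual core of this kind of model is precision weighting: a percept is a compromise between prior expectation and sensory input, weighted by how reliable each is taken to be. A minimal sketch of the Gaussian case, with illustrative numbers that are assumptions rather than values from the post:

```python
def combine(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance for a Gaussian prior combined with a Gaussian observation."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs)
    return post_mean, post_var

# Strong prior, noisy input: the percept stays near expectation (mean ~0.05)
print(combine(prior_mean=0.0, prior_var=0.1, obs=5.0, obs_var=10.0))
# Weak prior, precise input: the percept tracks the input (mean ~4.95)
print(combine(prior_mean=0.0, prior_var=10.0, obs=5.0, obs_var=0.1))
```

On this framing, the disorders the post discusses correspond to the weighting going wrong, with sensory evidence or priors being assigned too much or too little precision.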
Sep 05, 2015
ssc
14 min 1,866 words 318 comments
Scott Alexander argues that psychology is indeed in crisis, contrary to a New York Times article's claim, due to issues like publication bias and low replication rates.
Scott Alexander critiques a New York Times article claiming psychology is not in crisis despite low replication rates. He argues that the article ignores publication bias, experimenter effects, and low base rates of true hypotheses. Scott contends that even if failed replications are due to different conditions, this still represents a crisis as it undermines the practical utility of psychological findings. He suggests that while we can't investigate every failed replication, studying some might reveal why replication issues keep occurring in psychology.
Aug 04, 2015
ssc
78 min 10,781 words 679 comments
Scott Alexander defends Less Wrong and Eliezer Yudkowsky against accusations of being anti-scientific, arguing that developing rational thinking skills beyond traditional scientific methods is valuable and necessary.
Scott Alexander responds to Topher Hallquist's criticism of Less Wrong and Eliezer Yudkowsky as being 'anti-scientific rationality'. Scott argues that Hallquist's criticisms are often unfair or inaccurate, taking quotes out of context or misunderstanding Yudkowsky's positions. He defends the rationalist community's efforts to develop better thinking tools that go beyond traditional scientific methods, while still respecting science. Scott contends that developing an 'Art of Thinking Clearly' is valuable and necessary, especially for experts who have to make difficult judgments. He argues Less Wrong is not against science, but wants to strengthen and supplement it with additional rational thinking skills.
Dec 17, 2013
ssc
8 min 1,019 words 36 comments
Scott Alexander analyzes a study revealing poor statistical literacy among doctors, critiquing both the study and its implications for medical decision-making.
Scott Alexander discusses a study showing poor statistical literacy among doctors, particularly Ob/Gyn residents. The post highlights that only 42% of doctors correctly answered a question about p-values, and only 26% correctly solved a Bayesian probability problem about mammogram results. Scott critiques the study's questions and interpretation, notes the Dunning-Kruger effect in self-reported statistical literacy, and points out gender differences in self-assessment. He concludes by questioning the FDA's decision to restrict individuals' access to their genome information based on doctors' supposed superior statistical understanding.
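For reference, the mammogram question is a base-rate problem that takes only a few lines to work out. The figures below are the commonly cited textbook version of the problem, not necessarily the exact numbers used in the study:

```python
# Worked base-rate calculation for a positive screening result.
prevalence = 0.01        # P(cancer) in the screened population (assumed)
sensitivity = 0.80       # P(positive test | cancer) (assumed)
false_positive = 0.096   # P(positive test | no cancer) (assumed)

true_positives = sensitivity * prevalence
false_positives = false_positive * (1 - prevalence)
p_cancer_given_positive = true_positives / (true_positives + false_positives)

print(f"P(cancer | positive mammogram) ~ {p_cancer_given_positive:.1%}")  # ~7.8%
```

The counterintuitive part, and presumably what tripped up most respondents, is that a positive result still leaves the probability of cancer under 10%, because true cases are rare relative to false positives.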