How do you avoid getting lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

13 posts found
Jul 25, 2023
acx
21 min · 2,730 words · 537 comments · 221 likes · podcast
Scott Alexander argues that intelligence is a useful, non-Platonic concept, and that this understanding supports the coherence of AI risk concerns.
Scott Alexander argues against the claim that AI doomers are 'Platonists' who believe in an objective concept of intelligence. He explains that intelligence, like other concepts, is a bundle of useful correlations that exist in a normal, fuzzy way. Scott demonstrates how intelligence is a useful concept by showing correlations between different cognitive abilities in humans and animals. He then argues that thinking about AI in terms of intelligence has been fruitful, citing the success of approaches that focus on increasing compute and training data. Finally, he explains how this understanding of intelligence is sufficient for the concept of an 'intelligence explosion' to be coherent.
Aug 18, 2022
acx
14 min · 1,815 words · 237 comments · 167 likes · podcast
Scott Alexander examines why skills plateau, proposing decay and interference hypotheses to explain the phenomenon across various fields.
Scott Alexander explores why skills plateau despite continued practice, focusing on creative artists, doctors, and formal education. He presents two main hypotheses: the decay hypothesis, where knowledge is forgotten if not regularly reviewed, and the interference hypothesis, where similar information blends together, making it difficult to learn new things in the same domain. The post discusses how these hypotheses explain various learning phenomena, including the ability to learn multiple unrelated skills simultaneously. Scott also considers edge cases and potential applications of these theories, such as mnemonic devices and language learning strategies.
Jun 10, 2022
acx
35 min · 4,463 words · 497 comments · 107 likes · podcast
Scott Alexander argues against Gary Marcus's critique of AI scaling, discussing the potential for future AI capabilities and the nature of human intelligence.
Scott Alexander responds to Gary Marcus's critique of AI scaling, arguing that current AI limitations don't necessarily prove statistical AI is a dead end. He discusses the scaling hypothesis, compares AI development to human cognitive development, and suggests that 'world-modeling' may emerge from pattern-matching abilities rather than being a distinct, hard-coded function. Scott also considers the potential capabilities of future AI systems, even if they don't achieve human-like general intelligence.
Feb 11, 2022
acx
27 min · 3,475 words · 75 comments · 34 likes · podcast
Scott Alexander explores expert and reader comments on his post about motivated reasoning and reinforcement learning, discussing brain function, threat detection, and the implementation of complex behaviors.
Scott Alexander discusses comments on his post about motivated reasoning and reinforcement learning. The post covers expert opinions on brain function and reinforcement learning, arguments about long-term rewards of threat detection, discussions on practical reasons for motivated reasoning, and miscellaneous thoughts on the topic. Key points include debates on how the brain processes information, the role of Bayesian reasoning, and the challenges of implementing complex behaviors through genetic encoding. Scott also reflects on his own experiences and the limitations of reinforcement learning models in explaining human behavior.
Feb 01, 2022
acx
6 min · 729 words · 335 comments · 122 likes · podcast
Scott analyzes motivated reasoning as misapplied reinforcement learning, explaining how it might arise from the brain's mixture of reinforceable and non-reinforceable architectures.
Scott explores the concept of motivated reasoning as misapplied reinforcement learning in the brain. He contrasts behavioral brain regions that benefit from hedonic reinforcement learning with epistemic regions where such learning would be detrimental. The post discusses how this distinction might explain phenomena like 'ugh fields' and motivated reasoning, especially in novel situations like taxes or politics where brain networks might be placed on a mix of reinforceable and non-reinforceable architectures. Scott suggests this model could explain why people often confuse what is true with what they want to be true.
Nov 13, 2019
ssc
19 min · 2,442 words · 212 comments · podcast
Scott Alexander examines the paradoxical relationship between autism and intelligence, discussing genetic and environmental factors, and proposing explanatory models for the observed lower IQ in autistic individuals despite genetic links to higher intelligence.
Scott Alexander explores the paradoxical relationship between autism and intelligence. While genetic studies show a link between autism risk genes and high IQ, autistic individuals generally have lower intelligence than neurotypical controls. The post discusses three main causes of autism: common 'familial' genes that increase IQ, rare 'de novo' mutations that are often detrimental, and non-genetic factors like obstetric complications. Scott examines various studies and proposes that even after adjusting for mutations and environmental factors, autism still seems to decrease IQ. He introduces a 'tower-vs-foundation' model to explain this phenomenon, where intelligence needs a strong foundation to support it, and an imbalance can lead to autism. The post concludes with a list of findings and their associated confidence levels.
Mar 20, 2019
ssc
11 min · 1,386 words · 103 comments · podcast
Scott Alexander argues that Free Energy/Predictive Coding and Perceptual Control Theory are fundamentally the same, and proposes using PCT's more intuitive terminology to help understand FE/PC.
Scott Alexander compares two theories of cognition and behavior: Free Energy/Predictive Coding (FE/PC) and Perceptual Control Theory (PCT). He argues that while they've developed differently, their foundations are essentially the same. Scott suggests that understanding PCT, which he finds more intuitive, can help in grasping the more complex FE/PC. He provides a glossary of equivalent terms between the two theories and gives examples to illustrate how PCT's terminology often makes more intuitive sense. The post concludes by discussing why FE/PC is more widely used despite PCT's advantages in explaining certain phenomena, and suggests teaching both terminologies to aid understanding.
Jan 10, 2019
ssc
5 min · 552 words · 58 comments · podcast
Scott Alexander presents a 'Grand Unified Chart' showing how different domains of knowledge share a similar structure in interpreting the world, arguing this is due to basic brain algorithms and effective epistemology.
Scott Alexander draws parallels between different domains of knowledge, showing how they all share a similar structure in interpreting the world. He presents a 'Grand Unified Chart' that compares Philosophy of Science, Bayesian Probability, Psychology, Discourse, Society, and Neuroscience. Each domain is broken down into three components: pre-existing ideas, discrepancies, and actual experiences. Scott argues that this structure is universal because it's built into basic brain algorithms and is the most effective way to do epistemology. He emphasizes that the interaction between facts and theories is bidirectional, and that theory change is a complex process resistant to simple contradictions.
Dec 25, 2017
ssc
13 min · 1,690 words · 191 comments · podcast
Scott Alexander preregisters hypotheses for the 2018 SSC Survey, planning to explore relationships between perception, cognition, personality, and demographics.
Scott Alexander preregisters his hypotheses for the 2018 SSC Survey. He plans to investigate various relationships between perception, cognition, personality traits, and demographic factors. Key areas of focus include replicating previous findings on perception and cognition, exploring concepts like 'first sight and second thoughts' and 'ambiguity tolerance', investigating birth order effects, and examining correlations with autism, political views, and sexual harassment. He also plans to follow up on a previous AI risk persuasion experiment.
Sep 18, 2017
ssc
35 min · 4,534 words · 333 comments · podcast
Scott Alexander reviews 'Mastering The Core Teachings Of The Buddha', a practical guide to Buddhist meditation that details the stages of insight and debunks common myths about enlightenment.
Scott Alexander reviews 'Mastering The Core Teachings Of The Buddha' by Daniel Ingram, an emergency physician who claims to have achieved enlightenment. The book provides a practical, no-nonsense approach to Buddhist meditation, detailing the stages of insight and their effects. It breaks down Buddhism into three teachings: morality, concentration, and wisdom. The review explores the book's explanation of meditation techniques, the stages of insight (including the challenging 'Dark Night of the Soul'), and the nature of enlightenment. Scott also discusses the book's debunking of common myths about enlightenment and questions why one would pursue this path given its potential difficulties. The review concludes by drawing parallels between the book's descriptions of meditation experiences and concepts from cognitive science.
Sep 06, 2017
ssc
8 min · 961 words · 78 comments · podcast
Scott Alexander explores the similarities between Predictive Processing and Perceptual Control Theory, arguing that PCT anticipates many aspects of PP and deserves recognition for its insights.
Scott Alexander draws parallels between Predictive Processing (PP) and Perceptual Control Theory (PCT), suggesting that PCT anticipates many aspects of PP. He argues that both theories share the concept of cognitive 'layers' acting at various levels, with upper layers influencing lower layers to produce desired stimuli. Scott notes that PP offers a more refined explanation for higher-level cognitive processes compared to PCT's sometimes overly simplistic model. He concludes by comparing Will Powers, the originator of PCT, to ancient Greek atomists like Epicurus, suggesting that Powers' work deserves recognition for its prescient insights, even if it has been superseded by more advanced theories.
Sep 05, 2017
ssc
51 min · 6,598 words · 271 comments · podcast
Scott Alexander reviews 'Surfing Uncertainty' by Andy Clark, exploring the predictive processing model of brain function and its wide-ranging explanatory power.
Scott Alexander reviews the book 'Surfing Uncertainty' by Andy Clark, which explains the predictive processing model of how the brain works. This model posits that the brain is constantly making predictions about sensory input and updating its models based on prediction errors. Scott explores how this theory can explain various phenomena like attention, imagination, learning, motor behavior, and even psychiatric conditions like autism and schizophrenia. He finds the model compelling and potentially explanatory for a wide range of cognitive and perceptual processes.
Aug 05, 2014
ssc
8 min · 931 words · 71 comments · podcast
Scott Alexander suggests humans have 'negative creativity' due to cognitive 'ruts', and explores ways to escape these ruts, arguing that AI might have an advantage in creative thinking.
Scott Alexander explores the concept of creativity, suggesting that humans have 'negative creativity' due to their brains being designed to stay in cognitive 'ruts'. He proposes that dreams, drugs, mishearing others, and metaphors are ways to escape these ruts and generate novel ideas. The post discusses examples of scientific discoveries made through dreams or drug use, and explains how adding 'noise' to thought processes might inspire creativity. Scott argues that AI might actually have an advantage in creativity, as they wouldn't have the built-in limitations humans do, and might be able to generate truly random ideas more easily.