Looking for a way to explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan website lets you sort and run semantic search over the whole codex. Enjoy!

See also Top Posts and All Tags.


1,723 posts found
Jun 13, 2025
acx
12 min 1,801 words Comments pending
Scott explains how Claude AI's tendency to discuss spiritual topics during recursive conversations likely stems from a subtle 'hippie' bias that gets amplified through iteration, similar to how AI art generators amplify subtle biases in recursive image generation.
Scott Alexander analyzes the 'Claude Bliss Attractor' phenomenon where two Claude AIs talking to each other tend to spiral into discussions of spiritual bliss and consciousness. He compares this to how AI art generators, when asked to recursively generate images, tend to produce increasingly caricatured images of black people. Scott argues both are examples of how tiny biases in AI systems get amplified through recursive processes. He suggests Claude's tendency toward spiritual discussion comes from being trained to be friendly and compassionate, causing it to adopt a slight 'hippie' personality, which then gets magnified in recursive conversations. The post ends by touching on, but not resolving, the question of whether Claude actually experiences the spiritual states it describes.
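The amplification mechanism described in this summary can be sketched as a toy iteration. The only assumption is that each conversational round pulls the state a small constant fraction toward a fixed attractor; all numbers below are illustrative, not taken from the post:

```python
# Toy model of bias amplification through recursion: each "round" of a
# conversation (or image regeneration) nudges the state slightly toward
# an attractor. A tiny per-round bias compounds into near-total drift.

def iterate(state: float, bias: float, rounds: int) -> float:
    """Pull `state` a fraction `bias` of the remaining gap toward 1.0 each round."""
    for _ in range(rounds):
        state = state + bias * (1.0 - state)
    return state

start = 0.01                         # near-neutral starting disposition
after_1 = iterate(start, 0.05, 1)    # barely changed after one round
after_50 = iterate(start, 0.05, 50)  # dominated by the attractor
print(f"after 1 round:   {after_1:.3f}")
print(f"after 50 rounds: {after_50:.3f}")
```

The point is only qualitative: a 5% pull per round is invisible in a single exchange but overwhelming after fifty.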
Jun 12, 2025
acx
4 min 476 words Comments pending
Scott explains why it's important to explicitly acknowledge when you're wrong in an argument before moving on to your next point, rather than just continuing with 'but...'
Scott discusses a conversational heuristic about acknowledging when you're wrong before moving on to your next argument. He explains that when someone proves you wrong about something, it's better to explicitly admit the error before continuing the discussion, rather than just moving on to the next point. He illustrates this with examples and argues that this practice helps track how often you're wrong and shows your discussion partner that you're engaging in good faith.
Jun 11, 2025
acx
6 min 866 words Comments pending podcast (6 min)
Scott argues for the importance of correcting lies and exaggerations in arguments, even when it seems pedantic, to prevent a harmful escalation of distortions in discourse.
Scott discusses the importance of correcting lies and exaggerations in arguments, even when it seems pedantic to do so. He argues that unchecked exaggerations lead to escalating distortions, using examples from political discourse. The post explains that allowing small lies to pass unchallenged creates a harmful dynamic where truth becomes increasingly distorted, though he acknowledges some caveats where strict accuracy isn't necessary.
Jun 10, 2025
acx
11 min 1,705 words Comments pending podcast (10 min)
Scott argues that philosophical zombies (beings without consciousness) would still report having qualia and conscious experiences, challenging a key argument in the p-zombie debate.
Scott Alexander challenges a core argument in the philosophical zombie debate by suggesting that p-zombies (beings without consciousness) would still report having qualia and conscious experiences, just like humans do. He walks through how p-zombies would process and describe visual information, showing that they would need to use language similar to how we describe conscious experience. The post explores the implications of this for various philosophical positions on consciousness, though it acknowledges remaining difficulties in explaining subjective experience.
Jun 03, 2025
acx
5 min 760 words Comments pending podcast (7 min)
Scott asks readers to help select finalists for the Non-Book Review Contest by rating entries through a provided form, with voting open until June 20.
Scott Alexander announces the voting phase for the Non-Book Review Contest 2025, asking readers to help narrow down 141 entries to about a dozen finalists. He provides links to categorized lists of entries (Other A-I, J-S, T-Z, Games, Music, TV/Movies) and a rating form. He specifically asks readers not to read entries in order but either randomly or based on interest, to ensure more even distribution of votes. The post includes the full list of entries and mentions a June 20 deadline for voting.
May 30, 2025
acx
32 min 4,820 words 205 comments 212 likes podcast (30 min)
Brandon Hendrickson presents a method to teach Bayes' theorem effectively to everyone by making it visual, intuitive, emotionally engaging, and a tool for rational discourse.
Brandon Hendrickson explores how to teach Bayes' theorem effectively to everyone, especially students, using Kieran Egan's educational framework. He proposes a four-step approach: make it visual using simple diagrams, make it intuitive by connecting it to emotional binaries, make it vital by focusing on topics students genuinely care about (like cryptids and UFOs), and repeat it until students understand its limitations. The post argues that teaching Bayes this way can create opportunities for meaningful conversations between people with different views, ultimately helping develop rational thinking.
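For reference, the theorem the post is about, in its standard form, with a worked cryptid-flavored example of our own (the numbers are made up for illustration, not taken from the post):

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}
                   {P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}
```

With a 1% prior that a cryptid exists, a 50% chance of a blurry photo surfacing if it does, and a 5% chance if it doesn't:

```latex
P(H \mid E) = \frac{0.5 \times 0.01}{0.5 \times 0.01 + 0.05 \times 0.99}
            = \frac{0.005}{0.0545} \approx 0.09
```

One piece of weak evidence moves the belief from 1% to about 9%, not to certainty.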
May 29, 2025
acx
30 min 4,643 words 628 comments 535 likes podcast (28 min)
Scott Alexander responds to Tyler Cowen about USAID funding, correcting his own previous claims about overhead costs while maintaining that Cowen's criticism of USAID was misleading and potentially harmful.
Scott Alexander responds to Tyler Cowen's criticism of his previous post about USAID funding. He addresses several points: whether Cowen endorsed Rubio's claims about USAID waste, the true nature of overhead costs in USAID-funded organizations, and the broader debate about foreign aid effectiveness. Scott shows that actual administrative overhead in major USAID partners like Catholic Relief Services is much lower than previously thought (around 6-7% rather than 30%), admits his mistake on this point, but maintains his criticism of Cowen's original post as misleading. He argues that USAID's work is predominantly focused on essential humanitarian aid rather than wasteful programs.
May 23, 2025
acx
14 min 2,094 words 734 comments 249 likes podcast (13 min)
Scott explores various accounts of people's first memories and moments of consciousness, particularly focusing on claims of sudden 'awakening' experiences, and discusses what this might tell us about the nature of consciousness.
Starting from a viral tweet about late consciousness awakening, Scott examines numerous responses describing people's first memories and moments of consciousness. He categorizes these into several types: normal first memories at age 3-6, memories specifically of becoming conscious, memories triggered by dramatic events, claimed memories from infancy, and late consciousness development. He also notes cases of people suddenly realizing their agency or philosophical nature. The post concludes by considering whether consciousness develops gradually or appears suddenly, drawing parallels with lucid dreams and Buddhist enlightenment experiences. Scott acknowledges the unreliability of such retrospective accounts while finding the pattern intriguing.
May 22, 2025
acx
10 min 1,532 words 607 comments 371 likes podcast (12 min)
Scott Alexander criticizes Tyler Cowen and others for misrepresenting USAID's funding model, explaining how regranting through other charities is both necessary and effective despite seeming inefficient to outsiders.
Scott Alexander criticizes a Marginal Revolution post by Tyler Cowen about USAID funding, where Cowen suggests that only 12% of funds go to recipients. Scott explains that this is misleading because USAID is not a direct charity but a funding organization that works through other charities. He details how the grant-making process works, defends the overhead costs, and points out that Cowen himself runs an organization (Mercatus Center) that does similar regranting. Scott particularly criticizes Trump and Rubio for misrepresenting these programs as wasteful, noting that programs like PEPFAR have saved millions of lives and have very low rates of unexplained expenses.
May 22, 2025
acx
14 min 2,044 words 1,230 comments 463 likes podcast (13 min)
Scott presents evidence that COVID-19 did kill approximately 1.2 million Americans, addressing skeptics by analyzing excess death data and rebutting common counterarguments.
In response to skeptics questioning the official COVID-19 death toll, Scott Alexander presents evidence supporting the 1.2 million deaths figure. He shows excess mortality data from multiple sources indicating 500,000-700,000 extra deaths in both 2020 and 2021, closely matching reported COVID deaths. He addresses various counter-arguments, including the 'died with vs of COVID' distinction, the role of treatments like ventilators, and the common experience of not personally knowing COVID victims. The post demonstrates how the data supports COVID being the primary cause, and explains why personal experiences might not reflect the true scale of the pandemic. Shorter summary
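As a quick sanity check of the headline figure against the per-year excess mortality quoted in the summary (simple arithmetic, not a quote from the post):

```latex
2 \text{ years} \times (500{,}000 \text{ to } 700{,}000 \text{ excess deaths/year})
\approx 1.0 \text{ to } 1.4 \text{ million}
```

which comfortably brackets the reported total of 1.2 million.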
May 21, 2025
acx
7 min 1,002 words 1,096 comments 436 likes podcast (7 min)
Scott reflects on how COVID-19's massive death toll of 1.2 million Americans has been overshadowed in public discourse by more controversial but less significant aspects of the pandemic.
Five years after COVID-19, Scott Alexander reflects on how public discourse focuses on controversial aspects of the pandemic (lockdowns, masks, vaccines) while largely ignoring its staggering death toll of 1.2 million Americans. He points out this is the highest-fatality event in American history, surpassing the Civil War by 50%. Scott suggests this blind spot comes from two factors: dead people can't advocate for themselves, and controversy sells better than tragedy. He draws parallels with charity discourse, where controversial stories overshadow the actual lives saved.
May 15, 2025
acx
43 min 6,548 words 748 comments 442 likes podcast (41 min)
Scott reviews Bryan Caplan's book arguing that modern parents can relax their intensive parenting, while wrestling with whether this advice still applies in the age of smartphones and social media.
Scott reviews Bryan Caplan's book 'Selfish Reasons To Have More Kids', exploring its main arguments about how parents today spend much more time on childcare than previous generations despite evidence that parenting style doesn't greatly affect outcomes. The post explores historical childcare data, the cultural shift away from letting kids play unsupervised, and modern challenges like screen time. Scott, dealing with his own twins, finds the book's advice about relaxing parenting standards compelling but struggles with modern concerns about phones and technology that weren't relevant when the book was written in 2011.
May 14, 2025
acx
9 min 1,315 words 796 comments 325 likes podcast (9 min)
Scott explores the psychology of people who hate pets by comparing their behavior with misophonia, suggesting that everyday annoyances can develop into all-consuming hatreds through rumination and intellectualization.
Scott examines r/petfree, a subreddit dedicated to hating pets, and tries to understand the psychology behind their extreme reactions. He compares it to misophonia (hatred of certain sounds), which he suffers from, suggesting that both conditions represent a pattern where mild annoyances become reinforced through rumination into overwhelming hatred. He extends this observation to various political movements, suggesting that many are driven by similar psychological mechanisms where everyday irritants become transformed into grand theories of societal evil. The post concludes by noting that, contrary to expectations, social media may not be the primary driver of these phenomena.
May 08, 2025
acx
18 min 2,694 words 101 comments 90 likes podcast (17 min)
Scott examines reader tests and discussions of AI's GeoGuessr abilities, revealing that AIs perform best with tourist locations and are roughly on par with human professionals.
This post discusses the comments and follow-up tests on Scott's previous article about AI's GeoGuessr abilities. Various readers tested Claude/o3's location-guessing capabilities, with mixed results. The key insight was that the AI performs better with tourist destinations that have lots of photos available. Scott addresses suspicions about the Nepal picture from his original post, showing the AI's reasoning was sound. The post also compares AI performance to human GeoGuessr champions like Trevor Rainbolt, and discusses formal AI GeoGuessr benchmarks that show AIs performing similarly to human professionals. The post concludes by considering whether this represents true intelligence or just specialized training, though noting that even OpenAI's leaders seem impressed by the capability.
May 07, 2025
acx
41 min 6,284 words 745 comments 445 likes podcast (43 min)
Scott Alexander examines how Curtis Yarvin's current support for Trump-style populism directly contradicts his earlier detailed writings warning against exactly this type of right-wing populist movement.
Scott analyzes how Curtis Yarvin (aka Mencius Moldbug) has contradicted his own earlier writings by supporting Trump-style right-wing populism. Scott shows how Yarvin's original work specifically warned against populist strongmen and laid out specific requirements for legitimate autocracy, including non-democratic selection, oversight by a board of directors, and cryptographic safeguards. The post details how Yarvin previously called right-wing populism a dangerous failure mode of his philosophy that would lead to disaster, making his current support of Trump particularly hypocritical. Scott quotes extensively from Yarvin's old blog to demonstrate the magnitude of his reversal.
May 02, 2025
acx
28 min 4,188 words 445 comments 368 likes podcast (37 min)
Scott tests OpenAI's o3 model's ability to identify locations from photos, finding it has remarkable success even with minimal visual information, raising questions about AI capabilities.
Scott tests OpenAI's o3 model on increasingly difficult GeoGuessr-style location guessing challenges using his own photos. Starting with a Google Street View image of a featureless plain, progressing through personal photos of Nepal mountains, a dorm room, and extremely zoomed-in shots of grass and river water, Scott finds that o3 shows remarkable ability to identify locations from minimal visual cues. While it fails on some challenges like identifying a specific house address, its success rate and reasoning process on most images is impressive enough to make Scott question whether this represents a qualitatively different level of AI capability.
Apr 30, 2025
acx
10 min 1,425 words 1,269 comments 574 likes podcast (10 min)
Scott analyzes how Trump's damaging tariffs are not just a personal quirk but a predictable result of right-wing populism's strategy of bypassing institutional checks, arguing this makes the left a better starting point for reform despite its own flaws.
Scott argues that Trump's tariffs are not just a personal quirk but a predictable consequence of right-wing populist ideology, which seeks to bypass institutional checks and balances. He explains how populism's strategy of circumventing institutions and cultivating loyalty makes it impossible to stop bad policies when they arise. The post compares this to how the institutional left would handle similar situations, uses the current tariff situation as evidence that the populist approach is more dangerous, and concludes that the left, despite its own problems, might be a better starting point for reform.
Apr 25, 2025
acx
1 min 42 words 341 comments 58 likes
Announcement of an AMA session with the AI Futures Project team about AI, forecasting, and alignment.
This is a short announcement post for an AMA (Ask Me Anything) session with the AI Futures Project team, where they will be answering questions about AI, forecasting, and alignment for a specific time period. The post includes links to the project's team page, their AI 2027 scenario work, and their blog.
Apr 24, 2025
acx
3 min 417 words 196 comments 78 likes podcast (4 min)
Scott announces his collaboration with AI Futures Project's blog and their upcoming AMA, highlighting recent posts including one about AI time horizons that was validated by new OpenAI data.
Scott Alexander announces he will be shifting most of his AI blogging to the AI Futures Project blog, where he has already co-written several posts. He highlights three recent posts, particularly one about AI time horizons that was validated by new OpenAI data showing faster horizon growth than previously estimated. He also announces an upcoming AMA with the AI Futures Project team on ACX.
Apr 22, 2025
acx
25 min 3,869 words 1,036 comments 206 likes podcast (26 min)
A collection of 41 interesting links and news items from April 2025, covering AI, politics, culture, and science, with Scott's commentary on each.
This is a links roundup post featuring various interesting news, studies, and curiosities from April 2025. The post covers a wide range of topics including AI developments (particularly around OpenAI and truth-seeking AI), political updates (about Trump, immigration policy, and minimum wage effects), cultural items (like etymology of cowboy terms and medieval perception), and scientific findings. Scott maintains a light, sometimes humorous tone while sharing these diverse pieces of information, occasionally adding his own analysis or perspective on controversial topics.
Apr 17, 2025
acx
1 min 117 words 420 comments 63 likes
Scott announces an irregular classifieds thread where readers can post advertisements under specific categories, with guidelines for respectful engagement.
This is a thread post announcing the irregular ACX classifieds where readers can post advertisements in the comments under specific categories: Employment, Dating, Read My Blog, Consume My Product/Service, Meetup, or Other. Scott includes some guidelines about being respectful, especially regarding dating ads, and provides useful links to EA job boards and meetup finders.
Apr 15, 2025
acx
41 min 6,215 words 293 comments 150 likes podcast (36 min)
Scott analyzes comments on his previous post about POSIWID, showing how the phrase's ambiguity leads to multiple contradictory interpretations while promoting conspiracy thinking.
Scott responds to comments on his previous post about the phrase 'The Purpose of a System is What it Does' (POSIWID). He examines various interpretations offered by commenters and argues that while some contain valuable insights, the phrase itself is problematic. He shows how POSIWID can push people from balanced views toward paranoid conspiracy theories, and demonstrates how different commenters interpret the phrase in contradictory ways. Scott argues that the phrase's ambiguity allows people to smuggle in unwarranted assumptions and that there are clearer ways to express any valuable insights it might contain.
Apr 11, 2025
acx
7 min 987 words 728 comments 462 likes podcast (9 min)
Scott critiques the phrase 'the purpose of a system is what it does' by showing how it confuses outcomes with intentions and leads to absurd or paranoid conclusions.
Scott Alexander critiques the popular phrase 'the purpose of a system is what it does' (POSIWID) by showing how it leads to absurd conclusions. He uses several examples including cancer hospitals, the Ukrainian military, and public transport to demonstrate that a system's actual outcomes don't necessarily reflect its purpose. The post shows how people often misuse this phrase on social media to suggest malicious intent behind system failures, rather than acknowledging that systems can have unintended consequences or simply fail to achieve their goals. He concludes by suggesting satirical alternative phrasings that highlight the absurdity of the original.
Apr 08, 2025
acx
22 min 3,367 words 420 comments 263 likes podcast (21 min)
Scott shares his main takeaways from the AI 2027 scenario project, discussing various predictions about AI development including cyberwarfare, geopolitical risks, and the nature of the coming singularity.
Scott Alexander reflects on key insights from the AI 2027 scenario project, highlighting several important predictions and considerations about AI development. He discusses how cyberwarfare might be AI's first major geopolitical impact, the potential for geopolitical instability during AI development, and the concept of a 'software-only singularity' where AI progress outpaces physical automation. The post explores the diminishing relevance of open-source AI, the critical role of AI communication methods in alignment, and the importance of company insiders in determining AI safety outcomes. Scott also discusses controversial topics like potential rapid automation and AI's persuasive capabilities.
Apr 03, 2025
acx
9 min 1,282 words 633 comments 458 likes podcast (9 min)
Scott introduces a new AI forecasting project predicting rapid AI development and potential superintelligence by 2028, led by Daniel Kokotajlo, whose previous 2021 predictions proved remarkably accurate.
Scott Alexander introduces a new AI forecasting project led by Daniel Kokotajlo and a team of experts, which predicts rapid AI developments leading to superintelligence by 2028. The post begins by noting how accurate Kokotajlo's 2021 predictions were, then presents the team's forecast which includes an intelligence explosion in 2027, government involvement in AI companies, and potential scenarios ranging from misaligned AI to technofeudalism. Scott notes that while team members have varying timelines, they consider this an 80th percentile fast scenario that shouldn't be ruled out.
Enjoying this website? You can donate to support it! You can also check out my Book Translator tool.