Want to explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

10 posts found
Jul 26, 2024
acx
62 min · 8,560 words · 565 comments · 197 likes · podcast (47 min)
The review analyzes Real Raw News, a popular conspiracy theory website, examining its content, appeal, and implications in the context of modern media consumption and AI technology.
This book review analyzes the website Real Raw News, a popular source of conspiracy theories and fake news stories centered around Donald Trump and his alleged secret war against the 'Deep State'. The reviewer examines the site's content, its narrative techniques, and its appeal to believers, drawing parallels to comic book lore and discussing the psychological needs it fulfills. The review also considers the broader implications of such conspiracy theories in the age of AI-generated content.
Jun 14, 2024
acx
15 min · 2,061 words · 541 comments · 255 likes · podcast (13 min)
Scott Alexander attempts to replicate a poll claiming high rates of COVID vaccine deaths, finds much lower rates, and concludes such polls are unreliable due to bias.
Scott Alexander attempts to replicate a poll claiming high rates of COVID vaccine-related deaths. He conducts his own survey and finds much lower rates, investigates possible reasons for the discrepancy, and concludes that such polls are unreliable due to political bias and statistical misunderstanding. Scott's survey shows 0.6% of respondents reporting a vaccine-related death in their family, compared to 8.5% in the original poll. He follows up with respondents who reported deaths, finding most cases involve elderly individuals, and the numbers are consistent with normal death rates.
May 08, 2024
acx
22 min · 3,018 words · 270 comments · 96 likes · podcast (17 min)
Scott Alexander analyzes California's AI regulation bill SB1047, finding it reasonably well-designed despite misrepresentations, and ultimately supporting it as a compromise between safety and innovation.
Scott Alexander examines California's proposed AI regulation bill SB1047, which aims to regulate large AI models. He explains that contrary to some misrepresentations, the bill is reasonably well-designed, applying only to very large models and focusing on preventing catastrophic harms like creating weapons of mass destruction or major cyberattacks. Scott addresses various objections to the bill, dismissing some as based on misunderstandings while acknowledging other more legitimate concerns. He ultimately supports the bill, seeing it as a good compromise between safety and innovation, while urging readers to pay attention to the conversation and be wary of misrepresentations.
Dec 29, 2022
acx
36 min · 4,909 words · 838 comments · 351 likes · podcast (28 min)
Scott Alexander argues that even seemingly extreme media misinformation usually involves misleading presentation of true facts rather than outright fabrication, examining several reader-provided counterexamples.
Scott Alexander responds to criticisms of his previous post about media rarely lying by examining several examples readers provided. He argues that even in extreme cases like Alex Jones' Sandy Hook conspiracy theories or claims about election fraud, media sources are typically highlighting true but misleading facts rather than outright fabricating information. Scott contends this matters because it means efforts to censor 'misinformation' will always require subjective judgment calls rather than being a straightforward process of removing falsehoods. He suggests people want to believe bad actors are doing something fundamentally different than good faith reasoning, but in reality most are just reasoning poorly under uncertainty.
Dec 22, 2022
acx
13 min · 1,748 words · 641 comments · 517 likes · podcast (12 min)
Scott Alexander argues that media rarely lies outright but often misleads through lack of context, making censorship of 'misinformation' problematic.
Scott Alexander argues that media rarely lies explicitly, but instead misinforms through misinterpretation, lack of context, or selective reporting. He provides examples from both alternative (Infowars) and mainstream (New York Times) media to illustrate how technically true information can be presented in misleading ways. The post critiques the idea that censorship can easily distinguish between 'misinformation' and 'good information', arguing that determining necessary context is subjective and value-laden. Scott concludes that there isn't a clear line between misinformation and proper contextualization, making censorship inherently biased.
Aug 06, 2021
acx
41 min · 5,613 words · 406 comments · 57 likes · podcast (34 min)
Scott Alexander responds to comments on his AI risk post, discussing AI self-awareness, narrow vs. general AI, catastrophe probabilities, and research priorities.
Scott Alexander responds to various comments on his original post about AI risk. He addresses topics such as the nature of self-awareness in AI, the distinction between narrow and general AI, probabilities of AI-related catastrophes, incentives for misinformation, arguments for AGI timelines, and the relationship between near-term and long-term AI research. Scott uses analogies and metaphors to illustrate complex ideas about AI development and potential risks.
Jan 20, 2016
ssc
8 min · 1,072 words · 286 comments
Scott Alexander criticizes websites that misleadingly suggest drug side effects by scraping FDA data, potentially causing patients to stop taking necessary medications.
Scott Alexander criticizes websites like EHealthMe that automatically generate pages suggesting connections between drugs and side effects based on FDA data scraping. He argues these sites are misleading and potentially harmful, as they can cause patients to stop taking necessary medications due to unfounded fears of side effects. The post begins with a personal anecdote about a patient concerned about Xolair causing depression, then delves into how these websites operate and why their information is unreliable. Scott emphasizes the scummy nature of these practices and their potential to harm vulnerable individuals, concluding with a stark example of how such misinformation could lead to tragedy.
Feb 17, 2014
ssc
29 min · 4,033 words · 235 comments
Scott Alexander debunks viral misinformation about false rape accusation rates and provides more accurate estimates, while criticizing the spread of such inaccuracies in feminist circles.
Scott Alexander critiques a viral Buzzfeed article that claims false rape accusations are extremely rare, showing how the article's statistics are severely flawed. He provides a more accurate analysis of false rape accusation rates, estimating they affect between 0.3% to 3% of men in their lifetimes. Scott expresses frustration at how readily such misinformation spreads in feminist circles and urges readers to be extremely skeptical of statistics from these sources. He concludes by discussing the difficulty of dealing with rape accusations given the significant rates of both rape and false accusations.
Jun 11, 2013
ssc
14 min · 1,853 words · 61 comments
Scott Alexander analyzes the misrepresentation of the Gilbert/Frago case in media and online discourse, showing how legal complexities were oversimplified.
Scott Alexander examines the media coverage and online reactions to the Gilbert/Frago case, where a man was acquitted of murder after shooting an escort. He points out that many articles and blog posts misrepresented the case, claiming it set a precedent for legally killing sex workers in Texas. Scott presents a more nuanced view, citing legal experts who explain that the acquittal was likely due to lack of proof of intent to kill, rather than any judgment on the value of sex workers' lives. The post highlights the complexity of the legal issues involved and how they were oversimplified in much of the coverage.
Apr 04, 2013
ssc
9 min · 1,124 words · 32 comments
Scott Alexander debunks a misleading Facebook meme about Google autocomplete suggestions for disabled people, showing that Google's suggestions are universally negative for almost all groups.
Scott Alexander critiques a viral Facebook meme about Google autocomplete suggestions for 'disabled people should...', finding it misleading. He demonstrates that Google's autocomplete suggestions are universally negative for almost all demographic groups, not just disabled people. Scott shows how many of the search results are actually denouncing the negative statements, not supporting them. He explores Google's autocomplete suggestions for various groups, finding they often suggest death or extermination, regardless of the group. The post ends with a humorous note about Google's 'Don't Be Evil' motto contrasting with these misanthropic autocomplete results.