How do you avoid getting lost in Scott Alexander's 1500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

9 posts found
Jul 26, 2024 · acx · 66 min · 8,560 words · 565 comments · 197 likes · podcast
The review analyzes Real Raw News, a popular conspiracy theory website, examining its content, appeal, and implications in the context of modern media consumption and AI technology.
This book review analyzes the website Real Raw News, a popular source of conspiracy theories and fake news stories centered around Donald Trump and his alleged secret war against the 'Deep State'. The reviewer examines the site's content, its narrative techniques, and its appeal to believers, drawing parallels to comic book lore and discussing the psychological needs it fulfills. The review also considers the broader implications of such conspiracy theories in the age of AI-generated content.
Jun 14, 2024 · acx · 16 min · 2,061 words · 541 comments · 255 likes · podcast
Scott Alexander attempts to replicate a poll claiming high rates of COVID vaccine deaths, finds much lower rates, and concludes such polls are unreliable due to bias.
Scott Alexander attempts to replicate a poll claiming high rates of COVID vaccine-related deaths. He conducts his own survey and finds much lower rates, investigates possible reasons for the discrepancy, and concludes that such polls are unreliable due to political bias and statistical misunderstanding. Scott's survey shows 0.6% of respondents reporting a vaccine-related death in their family, compared to 8.5% in the original poll. He follows up with respondents who reported deaths, finding most cases involve elderly individuals, and the numbers are consistent with normal death rates.
May 08, 2024 · acx · 24 min · 3,018 words · 270 comments · 96 likes · podcast
Scott Alexander analyzes California's AI regulation bill SB1047, finding it reasonably well-designed despite misrepresentations, and ultimately supporting it as a compromise between safety and innovation.
Scott Alexander examines California's proposed AI regulation bill SB1047, which aims to regulate large AI models. He explains that contrary to some misrepresentations, the bill is reasonably well-designed, applying only to very large models and focusing on preventing catastrophic harms like creating weapons of mass destruction or major cyberattacks. Scott addresses various objections to the bill, dismissing some as based on misunderstandings while acknowledging other more legitimate concerns. He ultimately supports the bill, seeing it as a good compromise between safety and innovation, while urging readers to pay attention to the conversation and be wary of misrepresentations.
Dec 22, 2022 · acx · 14 min · 1,748 words · 641 comments · 517 likes · podcast
Scott Alexander argues that media rarely lies outright but often misleads through lack of context, making censorship of 'misinformation' problematic.
Scott Alexander argues that media rarely lies explicitly, but instead misinforms through misinterpretation, lack of context, or selective reporting. He provides examples from both alternative (Infowars) and mainstream (New York Times) media to illustrate how technically true information can be presented in misleading ways. The post critiques the idea that censorship can easily distinguish between 'misinformation' and 'good information', arguing that determining necessary context is subjective and value-laden. Scott concludes that there isn't a clear line between misinformation and proper contextualization, making censorship inherently biased.
Aug 06, 2021 · acx · 44 min · 5,613 words · 406 comments · 57 likes · podcast
Scott Alexander responds to comments on his AI risk post, discussing AI self-awareness, narrow vs. general AI, catastrophe probabilities, and research priorities.
Scott Alexander responds to various comments on his original post about AI risk. He addresses topics such as the nature of self-awareness in AI, the distinction between narrow and general AI, probabilities of AI-related catastrophes, incentives for misinformation, arguments for AGI timelines, and the relationship between near-term and long-term AI research. Scott uses analogies and metaphors to illustrate complex ideas about AI development and potential risks.
Jan 20, 2016 · ssc · 9 min · 1,072 words · 286 comments · podcast
Scott Alexander criticizes websites that misleadingly suggest drug side effects by scraping FDA data, potentially causing patients to stop taking necessary medications.
Scott Alexander criticizes websites like EHealthMe that automatically generate pages suggesting connections between drugs and side effects based on FDA data scraping. He argues these sites are misleading and potentially harmful, as they can cause patients to stop taking necessary medications due to unfounded fears of side effects. The post begins with a personal anecdote about a patient concerned about Xolair causing depression, then delves into how these websites operate and why their information is unreliable. Scott emphasizes the scummy nature of these practices and their potential to harm vulnerable individuals, concluding with a stark example of how such misinformation could lead to tragedy.
Feb 17, 2014 · ssc · 32 min · 4,033 words · 235 comments · podcast
Scott debunks viral misinformation about false rape accusation rates and provides more accurate estimates, while criticizing the spread of such inaccuracies in feminist circles.
Scott Alexander critiques a viral Buzzfeed article that claims false rape accusations are extremely rare, showing how the article's statistics are severely flawed. He provides a more accurate analysis of false rape accusation rates, estimating they affect between 0.3% and 3% of men in their lifetimes. Scott expresses frustration at how readily such misinformation spreads in feminist circles and urges readers to be extremely skeptical of statistics from these sources. He concludes by discussing the difficulty of dealing with rape accusations given the significant rates of both rape and false accusations.
Jun 11, 2013 · ssc · 15 min · 1,853 words · 61 comments · podcast
Scott Alexander analyzes the misrepresentation of the Gilbert/Frago case in media and online discourse, showing how legal complexities were oversimplified.
Scott Alexander examines the media coverage and online reactions to the Gilbert/Frago case, where a man was acquitted of murder after shooting an escort. He points out that many articles and blog posts misrepresented the case, claiming it set a precedent for legally killing sex workers in Texas. Scott presents a more nuanced view, citing legal experts who explain that the acquittal was likely due to lack of proof of intent to kill, rather than any judgment on the value of sex workers' lives. The post highlights the complexity of the legal issues involved and how they were oversimplified in much of the coverage.
Apr 04, 2013 · ssc · 9 min · 1,124 words · 32 comments · podcast
Scott Alexander debunks a misleading Facebook meme about Google autocomplete suggestions for disabled people, showing that Google's suggestions are negative for almost every group.
Scott Alexander critiques a viral Facebook meme about Google autocomplete suggestions for 'disabled people should...', finding it misleading. He demonstrates that Google's autocomplete suggestions are negative for almost every demographic group, not just disabled people. Scott shows how many of the search results are actually denouncing the negative statements, not supporting them. He explores Google's autocomplete suggestions for various groups, finding they often suggest death or extermination, regardless of the group. The post ends with a humorous note about Google's 'Don't Be Evil' motto contrasting with these misanthropic autocomplete results.