How do you avoid getting lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

4 posts found
Feb 08, 2022
acx
18 min · 2,258 words · 338 comments · 346 likes · podcast
Scott Alexander discusses the dangers of relying on 'Heuristics That Almost Always Work' through various examples, highlighting their limitations and potential consequences.
Scott Alexander explores the concept of 'Heuristics That Almost Always Work' through various examples, such as a security guard, doctor, futurist, skeptic, interviewer, queen, and weatherman. He argues that while these heuristics are correct 99.9% of the time, they provide no real value and could be replaced by a rock with a simple message. The post highlights the dangers of relying too heavily on such heuristics, including wasted resources on experts, false confidence, and the potential for catastrophic failures when the rare exceptions occur. Scott concludes by noting that those who dismiss rationality often rely on these heuristics themselves, and emphasizes the importance of being aware of the 0.1% of cases where the heuristics fail.
Apr 14, 2020
ssc
33 min · 4,215 words · 863 comments · podcast
Scott Alexander argues that the media's failure in coronavirus coverage was not about prediction, but about poor probabilistic reasoning and decision-making under uncertainty.
This post discusses the media's failure in covering the coronavirus pandemic, arguing that the issue was not primarily one of prediction but of probabilistic reasoning and decision-making under uncertainty. Scott Alexander argues that while predicting the exact course of the pandemic was difficult, the media and experts failed to properly convey and act on the potential risks even when the probability seemed low. He contrasts this with examples of good reasoning from individuals who took the threat seriously early on, not because they were certain it would be catastrophic, but because they understood the importance of preparing for low-probability, high-impact events.
Jul 12, 2018
ssc
7 min · 846 words · 42 comments · podcast
Scott Alexander examines, and expresses skepticism about, a theory attributing the unusually high doses of melatonin supplements to patent avoidance, while reflecting on the challenges of evaluating such claims.
Scott Alexander discusses a theory about why melatonin supplements are often sold in doses much higher than recommended. The theory, proposed by Dr. Richard Wurtman, suggests that supplement manufacturers used higher doses to avoid paying royalties on a patent covering lower doses. Scott expresses skepticism about this explanation, citing reasons such as the unusualness of patenting only up to a certain dose and the fact that many supplements are sold in high doses. He also notes that some companies do sell melatonin at the recommended dose without legal issues. Scott reflects on the challenges of evaluating such claims, balancing expert knowledge against rational skepticism. An update clarifies that the patent likely influenced initial supplement production but has since expired, though high-dose traditions persist.
Jun 08, 2017
ssc
19 min · 2,467 words · 286 comments · podcast
Scott analyzes a new survey of AI researchers, showing diverse opinions on AI timelines and risks, with many acknowledging potential dangers but few prioritizing safety research.
This post discusses a recent survey of AI researchers about their opinions on AI progress and potential risks. The survey, conducted by Grace et al., shows a wide range of predictions about when human-level AI might be achieved, with significant uncertainty among experts. The post highlights that while many AI researchers acknowledge potential risks from poorly-aligned AI, few consider it among the most important problems in the field. Scott compares these results to a previous survey by Müller and Bostrom, noting some differences in methodology and results. He concludes by expressing encouragement that researchers are taking AI safety arguments seriously, while also pointing out a potential disconnect between acknowledging risks and prioritizing work on them.