How do you avoid getting lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan site lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

5 posts found
Feb 15, 2023
acx
29 min · 3,698 words · 534 comments · 189 likes · podcast
Scott clarifies his stance on conspiracy theories and expert trust, advocating for a nuanced approach that acknowledges both the value of expert opinion and the potential for misrepresentation.
Scott revisits his previous post on fideism, addressing criticism and clarifying his stance on conspiracy theories and trusting experts. He presents three perspectives on conspiracy theories: Idiocy, Intellect, and Infohazard, and argues for a nuanced approach. Scott emphasizes that conspiracy theories can be convincing even to smart people, and that completely avoiding discussion of them is not always effective. He stresses the importance of trusting experts while also being aware of potential biases and misrepresentations. The post concludes with detailed advice on how to approach conspiracy theories and maintain a balanced perspective.
Apr 17, 2017
ssc
47 min · 6,075 words · 609 comments · podcast
Scott Alexander examines his evolving view of scientific consensus, realizing it is more reliable and self-correcting than he previously thought.
Scott Alexander reflects on his changing perspective on scientific consensus, sharing personal experiences in which he initially believed he was defying consensus but later discovered that the scientific community was often ahead of or aligned with his views. He discusses examples from various fields, including the replication crisis, nutrition science, social justice issues, and AI risk. Alexander concludes that scientific consensus, while not perfect, is remarkably effective and trustworthy, often self-correcting within a decade of new evidence emerging.
Aug 09, 2015
ssc
27 min · 3,495 words · 424 comments · podcast
Scott explores the nature of scientific contrarianism, discussing how ideas spread through the scientific community and the challenges faced by both crackpots and legitimate contrarians.
This post discusses contrarians and crackpots in science, exploring how ideas move through different levels of the scientific community. Scott examines cases like Gary Taubes and the serotonin theory of depression to illustrate how scientific consensus can differ at various levels. He proposes a pyramid model of scientific knowledge dissemination and suggests that contrarians may be skipping levels in this pyramid. The post then contrasts virtuous contrarians with crackpots, noting that the former often face indifference rather than opposition. Scott concludes by discussing paradigm shifts in science and how even correct contrarians often lose credit for their ideas.
May 22, 2015
ssc
43 min · 5,524 words · 517 comments · podcast
Scott Alexander provides evidence that many prominent AI researchers are concerned about AI risk, contrary to claims in some popular articles.
Scott Alexander responds to articles claiming that AI researchers are unconcerned about AI risk by listing prominent AI researchers who have expressed concerns about the potential risks of advanced AI. He argues that there is no clear divide between 'skeptics' and 'believers', but rather a general consensus that some preliminary work on AI safety is needed. The post highlights that the main disagreements concern the timeline for AI development and when preparations should begin, not whether the risks are real.
Jul 02, 2014
ssc
14 min · 1,707 words · 362 comments · podcast
Scott Alexander estimates the frequency of significant scientific failures to evaluate the plausibility of climate change skepticism.
Scott Alexander explores the frequency of scientific failures to assess the likelihood that climate change skepticism is correct. He defines criteria for significant scientific failures and identifies three clear examples: Lysenkoism, Freudian psychoanalysis, and behaviorism in psychology. After estimating the total number of possible scientific paradigms, he calculates a failure rate of about 1.2-3.6%. He concludes that this low failure rate doesn't lend much support to climate change skepticism, as it is similar to the proportion of papers that don't support anthropogenic climate change.