How do you avoid getting lost in Scott Alexander's 1500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

3 posts found
Jul 20, 2023 · acx · 33 min · 4,172 words · 519 comments · 138 likes · podcast
Scott Alexander analyzes the surprisingly low existential risk estimates from a recent forecasting tournament, particularly for AI risk, and explains why he only partially updates his own higher estimates.
Scott Alexander discusses the Existential Risk Persuasion Tournament (XPT), which aimed to estimate risks of global catastrophes using experts and superforecasters. The results showed unexpectedly low probabilities for existential risks, particularly for AI. Scott examines possible reasons for these results, including incentive structures, participant expertise, and timing of the study. He ultimately decides to partially update his own estimates, but not fully to the level suggested by the tournament, explaining his reasoning for maintaining some disagreement with the experts.
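The "partially update" move can be made concrete as a weighted compromise between two probability estimates. Here is a minimal sketch, assuming a weighted average in log-odds space (a common way to combine forecasts); the weight and both estimates are illustrative assumptions, not figures from the post or the tournament:

```python
import math

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-x))

def partial_update(own, other, weight):
    """Move `weight` of the way from `own` toward `other` in log-odds space."""
    return sigmoid((1 - weight) * logit(own) + weight * logit(other))

# Illustrative numbers only: a 20% personal estimate, a 2% tournament
# estimate, updating one third of the way toward the tournament.
print(partial_update(0.20, 0.02, 1 / 3))  # ~0.098
```

Averaging in log-odds rather than raw probabilities keeps a partial update sensible near the extremes, where small probabilities would otherwise dominate or vanish.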
Feb 12, 2020 · ssc · 5 min · 531 words · 115 comments · podcast
Scott Alexander proposes that confirmation bias might be a misapplication of normal Bayesian reasoning rather than a separate cognitive phenomenon.
Scott Alexander discusses confirmation bias, suggesting it might not be a separate phenomenon from normal reasoning but rather a misapplication of Bayesian reasoning. He uses an example of believing a friend who reports seeing a coyote in Berkeley but disbelieving the same friend reporting a polar bear. Scott argues this is similar to how we process information that confirms or challenges our existing beliefs. He proposes that when faced with evidence contradicting strong priors, we should slightly adjust our beliefs while heavily discounting the new evidence. The post critiques an evolutionary psychology explanation of confirmation bias from a Fast Company article, suggesting instead that confirmation bias might be a result of normal reasoning processes gone awry rather than a distinct cognitive bias.
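One way to see why this is ordinary Bayesian reasoning rather than a separate bias is to run the friend's report through Bayes' rule under different priors. A minimal sketch with made-up numbers; the priors and the friend's reliability are illustrative assumptions, not figures from the post:

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """Bayes' rule: P(hypothesis | friend's report)."""
    joint_true = prior * p_report_if_true
    joint_false = (1 - prior) * p_report_if_false
    return joint_true / (joint_true + joint_false)

# Assume the friend reports a sighting 90% of the time it really happened,
# and 5% of the time it didn't (mistake, joke, misidentification).
coyote = posterior(prior=0.10, p_report_if_true=0.9, p_report_if_false=0.05)
polar_bear = posterior(prior=1e-6, p_report_if_true=0.9, p_report_if_false=0.05)
print(f"coyote: {coyote:.2f}, polar bear: {polar_bear:.6f}")  # ~0.67 vs ~0.000018
```

The identical testimony carries very different weight against different priors, which is the post's point: heavily discounting a polar-bear report is legitimate Bayesian updating, not a distinct cognitive failure.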
Aug 06, 2013 · ssc · 17 min · 2,110 words · 137 comments · podcast
Scott Alexander defends Bayesianism as a valuable epistemology, contrasting it with absolutist and nihilistic approaches, and argues for its usefulness in complex reasoning.
Scott Alexander responds to David Chapman's criticism of 'Bayesianism' as a philosophy. He argues that Bayesianism is a genuine and valuable epistemology, contrasting it with two other approaches: Aristotelianism (which deals in absolutes) and Anton-Wilsonism (which advocates not believing anything). Scott posits that Bayesianism, or 'Epistemology X', is a synthesis of these, allowing for degrees of belief and updating based on evidence. He defends this view by sharing personal experiences and observations, arguing that while people may not always think in probabilities, having a coherent philosophical foundation like Bayesianism is valuable when dealing with complex issues outside one's comfort zone.