Scott Alexander analyzes the surprisingly low existential risk estimates from a recent forecasting tournament, particularly for AI risk, and explains why he only partially updates his own higher estimates.
Longer summary
Scott Alexander discusses the Existential Risk Persuasion Tournament (XPT), which aimed to estimate the risks of global catastrophes using both domain experts and superforecasters. The results showed unexpectedly low probabilities for existential risks, particularly from AI. Scott examines possible explanations for these results, including incentive structures, participant expertise, and the timing of the study. He ultimately updates his own estimates partway toward the tournament's numbers, but not all the way, and explains why he maintains some disagreement with the experts and superforecasters.