Scott Alexander dissects and criticizes a common argument against AI safety that compares it to past unfulfilled disaster predictions, finding it logically flawed and difficult to steelman.
Longer summary
Scott Alexander analyzes a common argument against AI safety concerns: the comparison to past unfulfilled predictions of disaster (a hypothetical 'coffeepocalypse'). Finding the argument logically flawed, he explores possible explanations for why people make it: whether it is an attempted existence proof, a way to trigger heuristics, or a misunderstanding of how evidence works. He concludes that he still doesn't fully understand the mindset behind such arguments and invites readers to point out if he ever makes similar logical mistakes.