How do you avoid getting lost among Scott Alexander's 1500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

1 post found
Apr 25, 2024
acx
20 min · 2,537 words · 912 comments · 168 likes · podcast
Scott Alexander dissects and criticizes a common argument against AI safety concerns, one that compares them to past unfulfilled disaster predictions, finding it logically flawed and difficult to steelman.
Longer summary: Scott Alexander analyzes a common argument against AI safety concerns, which compares them to past unfulfilled predictions of disaster (like a 'coffeepocalypse'). He finds this argument logically flawed and explores possible explanations for why people make it: an attempt at an existence proof, a way to trigger heuristics, or a misunderstanding of how evidence works. He concludes that he still doesn't fully understand the mindset behind such arguments and invites readers to point out if he ever makes similar logical mistakes.