How do you avoid getting lost among Scott Alexander's 1500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

1 post found
Mar 30, 2023
acx
16 min · 2,048 words · 1,126 comments · 278 likes · podcast
Scott Alexander critiques Tyler Cowen's use of the 'Safe Uncertainty Fallacy' in discussing AI risk, arguing that uncertainty doesn't justify complacency.
Longer summary: Scott Alexander critiques Tyler Cowen's use of the 'Safe Uncertainty Fallacy' in relation to AI risk. The fallacy reasons that because a situation is completely uncertain, it will turn out fine. Scott explains why this reasoning is flawed, using examples like the printing press and alien starships. He argues that even in uncertain situations we have to make best guesses rather than default to assuming everything will be fine. He criticizes Cowen's lack of specific probability estimates and argues that claiming total uncertainty is intellectually dishonest. The post ends with a satirical twist on Cowen's conclusion that society is designed to 'take the plunge' with new technologies.