How can you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

2 posts found
Jul 28, 2021
acx
14 min · 1,852 words · 267 comments · 54 likes · podcast (13 min)
Scott Alexander analyzes and criticizes arguments that claim worrying about one issue trades off against worrying about another, particularly in the context of AI risks.
Scott Alexander critiques an argument that worrying about long-term AI risks trades off against worrying about near-term AI risks. He explores similar arguments in other domains, finding them generally unconvincing. He proposes a model where topics can be complements or substitutes, but struggles to find real-world examples of substitutes outside of political point-scoring. Scott suggests the argument might make more sense for funding allocation, but points out that long-term AI risk funding likely wouldn't be redirected to near-term AI concerns if discontinued. He concludes that such arguments about AI may persist due to a misunderstanding of the funding landscape, and suggests that better communication about AI funding sources could help resolve the issue.
Jul 27, 2021
acx
17 min · 2,322 words · 441 comments · 126 likes · podcast (19 min)
Scott Alexander critiques Daron Acemoglu's Washington Post article on AI risks, highlighting flawed logic and unsupported claims about AI's current impacts.
Scott Alexander critiques an article by Daron Acemoglu in the Washington Post about AI risks. He identifies the main flaw as Acemoglu's argument that because AI is dangerous now, it can't be dangerous in the future. Scott argues this logic is flawed and that present and future AI risks are not mutually exclusive. He also criticizes Acemoglu's claims about AI's current negative impacts, particularly on employment, as not well supported by evidence. Scott discusses the challenges of evaluating new technologies' impacts and argues that superintelligent AI poses unique risks different from those of narrow AI. He concludes by criticizing the tendency of respected figures to dismiss AI risk concerns without proper engagement with the arguments.