Scott Alexander critiques Daron Acemoglu's Washington Post article on AI risks, highlighting flawed logic and unsupported claims about AI's current impacts.
Longer summary
Scott Alexander critiques an article by Daron Acemoglu in the Washington Post about AI risks. He identifies the main flaw as Acemoglu's implicit argument that because AI is dangerous now, it cannot be dangerous in the future; Scott argues that present and future AI risks are not mutually exclusive. He also finds Acemoglu's claims about AI's current negative impacts, particularly on employment, poorly supported by evidence. Scott discusses the difficulty of evaluating a new technology's impacts and argues that superintelligent AI poses unique risks distinct from those of narrow AI. He concludes by criticizing the tendency of respected figures to dismiss AI risk concerns without properly engaging with the arguments.