Scott Alexander analyzes and criticizes arguments that claim worrying about one issue trades off against worrying about another, particularly in the context of AI risks.
Longer summary
Scott Alexander critiques the argument that worrying about long-term AI risks trades off against worrying about near-term AI risks. He examines similar trade-off arguments in other domains and finds them generally unconvincing. He proposes a model in which topics of concern can act as complements or substitutes, but struggles to find real-world examples of substitutes outside of political point-scoring. He allows that the argument might make more sense for funding allocation, but points out that long-term AI risk funding likely wouldn't be redirected to near-term AI concerns if it were discontinued. He concludes that such arguments about AI may persist because of a misunderstanding of the funding landscape, and suggests that clearer communication about AI funding sources could help resolve the issue.