Scott Alexander explores theories to reconcile contradictory views on AI progress rates, considering the implications for AI development timelines and intelligence scaling.
Longer summary
Scott Alexander discusses the apparent contradiction between Eliezer Yudkowsky's argument that AI progress will be rapid once AI reaches human level and Katja Grace's data showing gradual AI improvement across human-level tasks. He explores several theories to reconcile the two, including mutational load, purpose-built hardware, varying sub-abilities, and the possibility that human intelligence variation is actually vast compared to that of other animals. The post ends by considering the implications for AI development timelines and the potential for rapid scaling of intelligence.