Scott Alexander argues against Gary Marcus's critique of AI scaling, discussing the potential for future AI capabilities and the nature of human intelligence.
Longer summary
Scott Alexander responds to Gary Marcus's critique of AI scaling, arguing that the limitations of current models do not prove that statistical AI is a dead end. He discusses the scaling hypothesis, compares AI development to human cognitive development, and suggests that 'world-modeling' may emerge from pattern-matching abilities rather than being a distinct, hard-coded function. Alexander also considers what future AI systems might be capable of even if they never achieve human-like general intelligence.