How do you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

4 posts found
Sep 28, 2022
acx
29 min · 4,017 words · 598 comments · 269 likes · podcast (29 min)
Scott Alexander examines how public predictions are judged over time, using examples like Nostradamus and Fukuyama to illustrate common pitfalls and offer advice on making predictions.
Scott Alexander explores the phenomenon of how people's predictions are judged over time, using Nostradamus and Francis Fukuyama as contrasting examples. He discusses how Nostradamus's vague prophecies are often interpreted as accurate in hindsight, while Fukuyama's 'end of history' thesis is frequently declared wrong whenever significant events occur. The post then analyzes other public figures' predictions and their reception, before offering advice on how to make predictions that won't damage one's credibility or cause personal misery. Scott concludes by acknowledging that he wants to make more predictions himself, while warning aspiring thought leaders about the challenges of public prediction-making.
Sep 12, 2022
acx
11 min · 1,502 words · 313 comments · 159 likes · podcast (13 min)
Scott Alexander won his three-year bet on compositionality in AI image generation within just three months, demonstrating unexpectedly rapid AI progress.
Scott Alexander discusses his recent bet about AI image models' ability to handle compositionality, which he won much sooner than expected. He explains the concept of compositionality in AI image generation, showing examples of DALL-E 2's limitations. Scott then describes the bet he made, predicting improvement by 2025. However, newer AI models like Google's Imagen achieved the required level of compositionality within just three months. This rapid progress supports the idea that AI development is advancing faster than many predict, and that scaling and normal progress can solve even complex problems in AI.
Jun 10, 2022
acx
32 min · 4,463 words · 497 comments · 107 likes · podcast (33 min)
Scott Alexander argues against Gary Marcus's critique of AI scaling, discussing the potential for future AI capabilities and the nature of human intelligence.
Scott Alexander responds to Gary Marcus's critique of AI scaling, arguing that current AI limitations don't necessarily prove statistical AI is a dead end. He discusses the scaling hypothesis, compares AI development to human cognitive development, and suggests that 'world-modeling' may emerge from pattern-matching abilities rather than being a distinct, hard-coded function. Alexander also considers the potential capabilities of future AI systems, even if they don't achieve human-like general intelligence.
Jun 07, 2022
acx
28 min · 3,787 words · 457 comments · 120 likes · podcast (26 min)
Scott Alexander bets that AI models will quickly overcome current limitations, based on how GPT-3 improved on GPT-2's shortcomings identified by Gary Marcus.
Scott Alexander discusses his prediction that AI models will quickly overcome current limitations, using examples of how GPT-3 improved on GPT-2's shortcomings. He analyzes Gary Marcus's critiques of AI capabilities, showing how many issues Marcus pointed out with GPT-2 and GPT-3 were resolved in subsequent versions. While acknowledging Marcus's expertise, Scott argues that the pattern of AI rapidly improving suggests current flaws will likely be fixed soon, though this doesn't necessarily disprove Marcus's deeper concerns about AI's true intelligence.