How do you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

7 posts found
Sep 18, 2024
acx
19 min, 2,577 words, 612 comments, 325 likes, podcast (18 min)
Scott Alexander examines how AI achievements, once considered markers of true intelligence or danger, are often dismissed as unimpressive, potentially leading to concerning AI behaviors being normalized.
Scott Alexander discusses recent developments in AI, focusing on two AI systems: Sakana, an 'AI scientist' that can write computer science papers, and Strawberry, an AI that demonstrated hacking abilities. He uses these examples to explore the broader theme of how our perception of AI intelligence and danger has evolved. The post argues that as AI achieves various milestones once thought to indicate true intelligence or danger, humans tend to dismiss these achievements as unimpressive or non-threatening. This pattern leads to a situation where potentially concerning AI behaviors might be normalized and not taken seriously as indicators of real risk.
Sep 12, 2022
acx
11 min, 1,502 words, 313 comments, 159 likes, podcast (13 min)
Scott Alexander won his three-year bet on compositionality in AI image generation within just three months, demonstrating unexpectedly rapid AI advancement.
Scott Alexander discusses his recent bet about AI image models' ability to handle compositionality, which he won much sooner than expected. He explains the concept of compositionality in AI image generation, showing examples of DALL-E 2's limitations. Scott then describes the bet he made, predicting improvement by 2025. However, newer AI models like Google's Imagen achieved the required level of compositionality within just three months. This rapid progress supports the idea that AI development is advancing faster than many predict, and that scaling and normal progress can solve even complex problems in AI.
Jun 07, 2022
acx
28 min, 3,787 words, 457 comments, 120 likes, podcast (26 min)
Scott Alexander bets that AI models will quickly overcome current limitations, based on how GPT-3 improved on GPT-2's shortcomings identified by Gary Marcus.
Scott Alexander discusses his prediction that AI models will quickly overcome current limitations, using examples of how GPT-3 improved on GPT-2's shortcomings. He analyzes Gary Marcus's critiques of AI capabilities, showing how many issues Marcus pointed out with GPT-2 and GPT-3 were resolved in subsequent versions. While acknowledging Marcus's expertise, Scott argues that the pattern of AI rapidly improving suggests current flaws will likely be fixed soon, though this doesn't necessarily disprove Marcus's deeper concerns about AI's true intelligence.
Apr 18, 2022
acx
16 min, 2,226 words, 463 comments, 45 likes, podcast (21 min)
Scott reviews recent changes in prediction markets covering the Ukraine war, nuclear risk, AI development, and other current events.
This post covers several prediction markets and forecasts related to current events. It discusses changes in Ukraine war predictions, nuclear risk estimates, AI development timelines, and other topics like Elon Musk's Twitter acquisition and the French presidential election. Scott analyzes discrepancies between different forecasts and markets, and explores potential reasons for changes in predictions.
Mar 21, 2022
acx
17 min, 2,344 words, 103 comments, 29 likes, podcast (22 min)
Scott Alexander updates readers on Ukraine war predictions, Insight Prediction's challenges, ACX 2022 Prediction Contest results, and various prediction market developments.
Scott Alexander provides an update on prediction markets related to the Ukraine war, discusses the situation with Insight Prediction (a prediction market platform), shares data from the ACX 2022 Prediction Contest, and gives brief updates on various prediction-related topics. The post covers changes in probabilities for key Ukraine war outcomes, the challenges faced by Insight Prediction due to the war, analysis of the ACX prediction contest data, and mentions of new prediction market platforms and AI-related predictions.
Aug 02, 2017
ssc
19 min, 2,607 words, 275 comments
Scott Alexander explores theories to reconcile contradictory views on AI progress rates, considering the implications for AI development timelines and intelligence scaling.
Scott Alexander discusses the apparent contradiction between Eliezer Yudkowsky's argument that AI progress will be rapid once it reaches human level, and Katja Grace's data showing gradual AI improvement across human-level tasks. He explores several theories to reconcile this, including mutational load, purpose-built hardware, varying sub-abilities, and the possibility that human intelligence variation is actually vast compared to other animals. The post ends by considering implications for AI development timelines and potential rapid scaling of intelligence.
Jan 11, 2017
ssc
8 min, 1,067 words, 350 comments
Scott Alexander warns against forming strong heuristics based on limited data, using examples from AI research, elections, and campaign strategies to illustrate the pitfalls of this approach.
Scott Alexander discusses the dangers of forming strong heuristics based on limited data points. He presents three examples: AI research progress, election predictions, and campaign strategies. In each case, he shows how people formed confident heuristics after observing patterns in just one or two instances, only to be surprised when these heuristics failed. The post argues against treating life events as moral parables and instead advocates for viewing them as individual data points that may not necessarily generalize. Scott uses a mix of statistical reasoning, historical examples, and cultural references to illustrate his points.