Looking for a way to explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

4 posts found
Sep 17, 2024
acx
19 min · 2,607 words · 170 comments · 78 likes · podcast (18 min)
Scott examines a new AI forecaster, discusses Polymarket's success, and reviews recent developments in prediction markets and forecasting.
This post discusses recent developments in AI forecasting and prediction markets. It starts by examining FiveThirtyNine, a new AI forecaster claiming to be superintelligent, but finds its performance questionable. The post then briefly mentions r/MarkMyWords, a subreddit for bold predictions. It goes on to discuss Polymarket's recent success, particularly in betting on the 2024 US presidential election. The post concludes with a roundup of interesting prediction markets and forecasting-related news, including political betting controversies in the UK and updates on the Kalshi vs. CFTC legal battle.
Feb 23, 2022
acx
80 min · 11,062 words · 385 comments · 121 likes · podcast (71 min)
Scott Alexander reviews competing methodologies for predicting AI timelines, focusing on Ajeya Cotra's biological anchors approach and Eliezer Yudkowsky's critique.
Scott Alexander reviews Ajeya Cotra's report on AI timelines for Open Philanthropy, which uses biological anchors to estimate when transformative AI might arrive, and Eliezer Yudkowsky's critique of this methodology. The post explains Cotra's approach, Yudkowsky's objections, and various responses, ultimately concluding that while the report may not significantly change existing beliefs, the debate highlights important considerations in AI forecasting.
Jun 10, 2020
ssc
26 min · 3,621 words · 263 comments · podcast (27 min)
Scott Alexander examines GPT-3's capabilities, improvements over GPT-2, and potential implications for AI development through scaling.
Scott Alexander discusses GPT-3, a large language model developed by OpenAI. He compares its capabilities to its predecessor GPT-2, noting improvements in text generation and basic arithmetic. The post explores the implications of GPT-3's performance, discussing scaling laws in neural networks and potential future developments. Scott ponders whether continued scaling of such models could lead to more advanced AI capabilities, while also considering the limitations and uncertainties surrounding this approach.
Jun 08, 2017
ssc
18 min · 2,467 words · 286 comments
Scott analyzes a new survey of AI researchers, showing diverse opinions on AI timelines and risks, with many acknowledging potential dangers but few prioritizing safety research.
This post discusses a recent survey of AI researchers about their opinions on AI progress and potential risks. The survey, conducted by Grace et al., shows a wide range of predictions about when human-level AI might be achieved, with significant uncertainty among experts. The post highlights that while many AI researchers acknowledge potential risks from poorly-aligned AI, few consider it among the most important problems in the field. Scott compares these results to a previous survey by Müller and Bostrom, noting some differences in methodology and results. He concludes by expressing encouragement that researchers are taking AI safety arguments seriously, while also pointing out a potential disconnect between acknowledging risks and prioritizing work on them.