How do you avoid getting lost in Scott Alexander's 1500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

4 posts found
Apr 17, 2023
acx
62 min · 8,045 words · 127 comments · 58 likes · podcast
Scott Alexander summarizes and responds to comments on his book review about IRBs, covering various perspectives on research regulation.
Scott Alexander summarizes key comments on his book review of 'From Oversight to Overkill' about IRBs (Institutional Review Boards). The post covers various perspectives on IRBs and research regulations, including stories from researchers, comparisons to other industries, discussions on regulation and liability, debates on act vs. omission distinctions, potential applications to AI governance, and other miscellaneous observations. Scott provides additional context and his own thoughts on many of the comments.
Mar 01, 2023
acx
35 min · 4,471 words · 621 comments · 202 likes · podcast
Scott Alexander critically examines OpenAI's 'Planning For AGI And Beyond' statement, discussing its implications for AI safety and development.
Scott Alexander analyzes OpenAI's recent statement 'Planning For AGI And Beyond', comparing it to a hypothetical ExxonMobil statement on climate change. He discusses why AI doomers are critical of OpenAI's research, explores potential arguments for OpenAI's approach, and considers cynical interpretations of their motives. Despite his skepticism, Scott acknowledges that OpenAI's statement represents a step in the right direction for AI safety, but calls for more concrete commitments and follow-through.
Aug 08, 2022
acx
24 min · 3,004 words · 643 comments · 176 likes · podcast
Scott Alexander examines why the AI safety community isn't more actively opposing AI development, exploring the complex dynamics between AI capabilities and safety efforts.
Scott Alexander discusses the complex relationship between AI capabilities research and AI safety efforts, exploring why the AI safety community is not more actively opposing AI development. He explains how major AI companies were founded by safety-conscious individuals, the risks of a 'race dynamic' in AI development, and the challenges of regulating AI globally. The post concludes that the current cooperation between AI capabilities companies and the alignment community may be the best strategy, despite its imperfections.
Jul 30, 2021
acx
11 min · 1,318 words · 243 comments · 38 likes · podcast
Scott Alexander discusses a new expert survey on long-term AI risks, highlighting the diverse scenarios considered and the lack of consensus on specific threats.
Scott Alexander discusses a new expert survey on long-term AI risks, conducted by Carlier, Clarke, and Schuett. Unlike previous surveys, this one focuses on people already working in AI safety and governance. The survey found a median ~10% chance of AI-related catastrophe, with individual estimates ranging from 0.1% to 100%. The survey explored six different scenarios for how AI could go wrong, including superintelligence, influence-seeking behavior, Goodharting, AI-related war, misuse by bad actors, and other possibilities. Surprisingly, all scenarios were rated as roughly equally likely, with 'other' being slightly higher. Scott notes three key takeaways: the relatively low probability assigned to unaligned AI causing extinction, the diversification of concerns beyond just superintelligence, and the lack of a unified picture of what might go wrong among experts in the field.