How do you explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

3 posts found
Oct 10, 2024
acx
48 min · 6,632 words · 459 comments · 177 likes · podcast (43 min)
Scott Alexander discusses the political battle over California's AI safety bill SB 1047, its veto by Governor Newsom, and the implications for future AI regulation efforts.
This post recounts the story behind SB 1047, a California bill aimed at regulating AI safety that was passed by the legislature but vetoed by Governor Newsom. Scott discusses the bill's supporters and opponents, the political maneuvering involved, and the aftermath of the veto. He analyzes the reasons for the veto, suggesting it was influenced by Silicon Valley donors and interests. The post also explores potential future strategies for AI regulation advocates, including possible alliances with left-wing groups. Scott concludes with reasons for optimism despite the setback, noting growing public support for AI regulation.
Jun 26, 2023
acx
5 min · 588 words · 151 comments · 70 likes · podcast (39 min)
Scott Alexander summarizes the AI-focused issue of Asterisk Magazine, highlighting key articles on AI forecasting, testing, and impacts.
Scott Alexander presents an overview of the latest issue of Asterisk Magazine, which focuses on AI. He highlights several articles, including his own piece on forecasting AI progress, interviews with experts on AI testing and China's AI situation, discussions on the future of microchips and AI's impact on economic growth, and various other pieces on AI safety, regulation, and related topics. The post also mentions non-AI articles and congratulates the Asterisk team on their work.
Aug 08, 2022
acx
22 min · 3,004 words · 643 comments · 176 likes · podcast (22 min)
Scott examines why the AI safety community isn't more actively opposing AI development, exploring the complex dynamics between AI capabilities and safety efforts.
Scott Alexander discusses the complex relationship between AI capabilities research and AI safety efforts, exploring why the AI safety community is not more actively opposing AI development. He explains how major AI companies were founded by safety-conscious individuals, describes the risks of a 'race dynamic' in AI development, and notes the challenges of regulating AI globally. The post concludes that the current cooperation between AI capabilities companies and the alignment community may be the best strategy, despite its imperfections.