How do you avoid getting lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

3 posts found
Mar 01, 2023
acx
35 min · 4,471 words · 621 comments · 202 likes · podcast
Scott Alexander critically examines OpenAI's 'Planning For AGI And Beyond' statement, discussing its implications for AI safety and development.
Scott Alexander analyzes OpenAI's statement 'Planning For AGI And Beyond', comparing it to a hypothetical ExxonMobil statement on climate change. He discusses why AI doomers are critical of OpenAI's research, explores potential arguments for OpenAI's approach, and considers cynical interpretations of its motives. Despite his skepticism, Scott acknowledges that OpenAI's statement represents a step in the right direction for AI safety, but urges more concrete commitments and follow-through.
Aug 06, 2021
acx
44 min · 5,613 words · 406 comments · 57 likes · podcast
Scott Alexander responds to comments on his AI risk post, discussing AI self-awareness, narrow vs. general AI, catastrophe probabilities, and research priorities.
Scott Alexander responds to various comments on his original post about AI risk. He addresses topics such as the nature of self-awareness in AI, the distinction between narrow and general AI, probabilities of AI-related catastrophes, incentives for misinformation, arguments for AGI timelines, and the relationship between near-term and long-term AI research. Scott uses analogies and metaphors to illustrate complex ideas about AI development and potential risks.
Feb 08, 2021
acx
17 min · 2,165 words · 104 comments · 62 likes · podcast
Scott Alexander examines prediction markets, focusing on Metaculus forecasts for AI development and its potential impacts.
Scott Alexander reviews several prediction markets, focusing on Polymarket, Kalshi, and Metaculus. He discusses the challenges of using Polymarket and the potential of Kalshi as a regulated futures exchange. The bulk of the post analyzes Metaculus predictions on AI-related topics, including the timeline for AGI development, human-machine intelligence parity, economic impacts of AI, and specific AI achievements. Scott notes the wide range of predictions and the interesting ways prediction markets can quantify expert opinions on complex topics.