How do you avoid getting lost in Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

5 posts found
Feb 13, 2024
acx
18 min · 2,299 words · 441 comments · 245 likes · podcast
Scott Alexander analyzes the astronomical costs and resources needed for future AI models, sparked by Sam Altman's reported $7 trillion fundraising goal.
Scott Alexander discusses Sam Altman's reported plan to raise $7 trillion for AI development. He breaks down the potential costs of future GPT models, explaining how each generation requires exponentially more computing power, energy, and training data. The post explores the resulting scaling bottlenecks, including energy infrastructure and training datasets that may not yet exist, and considers the implications for AI safety and OpenAI's stance on responsible AI development.
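The exponential-cost claim is easy to sketch numerically. The Python toy below uses a $100M baseline and a 30x per-generation multiplier; both constants are illustrative assumptions for this sketch, not figures taken from the post.

# Toy model of training-cost growth under pure exponential scaling.
# BASE_COST_USD and MULTIPLIER are illustrative assumptions, not post figures.
BASE_COST_USD = 1e8   # assume a GPT-4-class training run costs ~$100M
MULTIPLIER = 30       # assume each generation needs ~30x the compute of the last

for gen in range(4, 9):
    cost = BASE_COST_USD * MULTIPLIER ** (gen - 4)
    print(f"GPT-{gen}: ~${cost:,.0f}")

Under these made-up constants, costs pass a trillion dollars around the seventh generation, which is the shape of the argument the post walks through.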
Apr 25, 2023
acx
16 min · 1,967 words · 138 comments · 61 likes · podcast
The post explores AI forecasting capabilities, compares the performance of prediction markets, and provides updates on various ongoing predictions in technology and politics.
This post discusses recent developments in AI forecasting and prediction markets. It covers a study testing GPT-2's ability to predict past events, reports on Metaculus' accuracy compared to low-information priors and Manifold Markets, and updates on various prediction markets including those related to AI development, abortion medication, and Elon Musk's role at Twitter. The author also mentions new features in prediction platforms and research on forecasting methodologies.
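For readers unfamiliar with how a forecaster is scored against a low-information prior, the standard tool is the Brier score. A minimal Python sketch follows; the forecasts and outcomes are invented for illustration and come from neither the study nor the post.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes; lower is
    better, and an uninformed constant 0.5 forecast scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]            # what actually happened
market = [0.8, 0.3, 0.7, 0.9, 0.2]    # hypothetical market forecasts
prior = [0.5] * len(outcomes)         # low-information coin-flip baseline

print(brier_score(market, outcomes))  # 0.054, beats the baseline
print(brier_score(prior, outcomes))   # 0.25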
Nov 28, 2022
acx
40 min · 5,189 words · 450 comments · 107 likes · podcast
Scott Alexander examines Redwood Research's attempt to create an AI that avoids generating violent content, using Alex Rider fanfiction as training data.
Scott Alexander reviews Redwood Research's project to create an AI that can classify and avoid violent content in text completions, using Alex Rider fanfiction as training data. The project aimed to test whether AI alignment through reinforcement learning could work, but ultimately failed to create an unbeatable violence classifier. The article explores the challenges faced, the methods used, and the implications for broader AI alignment efforts.
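The general technique Redwood tested, filtering a generator's outputs through a learned classifier and resampling on failure, can be sketched in a few lines of Python. Everything below is a stand-in: the canned generator, keyword classifier, and threshold are toy stubs for illustration, not Redwood's actual models.

import random

VIOLENT_WORDS = {"stabbed", "shot", "bled"}

def generate_completion(prompt):
    """Stand-in for a language model: returns a canned completion."""
    return random.choice([
        "Alex ducked behind the crates and waited.",
        "The guard stabbed at the shadows.",
        "Alex slipped out through the service door.",
    ])

def violence_score(text):
    """Stand-in classifier: fraction of words on a violence list."""
    words = [w.strip(".,") for w in text.lower().split()]
    return sum(w in VIOLENT_WORDS for w in words) / len(words)

def safe_completion(prompt, threshold=0.05, tries=20):
    """Resample until a completion scores below the violence threshold."""
    for _ in range(tries):
        completion = generate_completion(prompt)
        if violence_score(completion) < threshold:
            return completion
    return None

print(safe_completion("Alex heard footsteps."))

The adversarial part of the real project amounted to hunting for prompts where the genuine classifier scores violent text as safe, which is the failure mode the post describes.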
Jun 10, 2022
acx
35 min · 4,463 words · 497 comments · 107 likes · podcast
Scott Alexander argues against Gary Marcus's critique of AI scaling, discussing the potential for future AI capabilities and the nature of human intelligence.
Scott Alexander responds to Gary Marcus's critique of AI scaling, arguing that current AI limitations don't necessarily prove statistical AI is a dead end. He discusses the scaling hypothesis, compares AI development to human cognitive development, and suggests that 'world-modeling' may emerge from pattern-matching abilities rather than being a distinct, hard-coded function. Alexander also considers the potential capabilities of future AI systems, even if they don't achieve human-like general intelligence.
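The scaling hypothesis at issue is usually stated as a smooth power law: predicted loss falls as a power of parameter count. The sketch below uses constants in the spirit of published scaling-law fits, included purely for illustration rather than as values endorsed by the post.

# Power-law scaling sketch: loss = (N_C / N) ** ALPHA.
# ALPHA and N_C are illustrative constants, not values from the post.
ALPHA = 0.076    # assumed scaling exponent
N_C = 8.8e13     # assumed normalizing constant, in parameters

for n_params in [1.5e9, 175e9, 1e12, 1e13]:  # GPT-2-scale up to hypothetical models
    loss = (N_C / n_params) ** ALPHA
    print(f"{n_params:.1e} params -> predicted loss {loss:.2f}")

The point of the curve is its smoothness: under the hypothesis there is no size at which pattern-matching gains suddenly stop, which is the crux of the disagreement with Marcus.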
Jun 07, 2022
acx
30 min · 3,787 words · 457 comments · 120 likes · podcast
Scott Alexander bets that AI models will quickly overcome current limitations, based on how GPT-3 improved on GPT-2's shortcomings identified by Gary Marcus.
Scott Alexander discusses his prediction that AI models will quickly overcome current limitations, using examples of how GPT-3 improved on GPT-2's shortcomings. He analyzes Gary Marcus's critiques of AI capabilities, showing how many issues Marcus identified in GPT-2 and GPT-3 were resolved in subsequent versions. While acknowledging Marcus's expertise, Scott argues that this pattern of rapid improvement suggests current flaws will likely be fixed soon, though it doesn't necessarily disprove Marcus's deeper concerns about AI's true intelligence.