How can you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

11 posts found
May 29, 2024
acx
42 min · 5,785 words · 1,045 comments · 125 likes · podcast (37 min)
A wide-ranging collection of 40 news items and interesting facts, covering AI, politics, science, economics, and culture, with the author's commentary.
This post is a collection of 40 diverse links and news items covering topics such as AI developments, politics, science, technology, economics, and culture. It includes updates on OpenAI and Google's AI projects, discussions on religious phenomena, analyses of social and economic trends, and various interesting facts and anecdotes. The author provides commentary and context for many of the items, often with a mix of humor and critical analysis.
Feb 13, 2024
acx
17 min · 2,299 words · 441 comments · 245 likes · podcast (13 min)
Scott Alexander analyzes the astronomical costs and resources needed for future AI models, sparked by Sam Altman's reported $7 trillion fundraising goal.
Scott Alexander discusses Sam Altman's reported plan to raise $7 trillion for AI development. He breaks down the potential costs of future GPT models, explaining how each generation requires exponentially more computing power, energy, and training data. The post explores the challenges of scaling at that pace, from building out compute and energy infrastructure to finding training data that may not yet exist. Scott also considers the implications for AI safety and OpenAI's stance on responsible AI development.
Dec 05, 2023
acx
34 min · 4,722 words · 289 comments · 68 likes · podcast (29 min)
The post discusses recent developments in prediction markets, including challenges in market design, updates to forecasting platforms, and current market predictions on various topics.
This post covers several topics in prediction markets and forecasting. It starts by discussing the challenges of designing prediction markets for 'why' questions, using the OpenAI situation as an example. It then reviews the progress of Manifold's dating site, Manifold.love, after one month. The post also covers Metaculus' recent platform updates, including new scoring systems and leaderboards. Finally, it analyzes various current prediction markets, including geopolitical events, elections, and the TIME Person of the Year.
Jun 20, 2023
acx
46 min · 6,324 words · 468 comments · 104 likes · podcast (40 min)
Scott Alexander reviews Tom Davidson's model predicting AI will progress from automating 20% of jobs to superintelligence in about 4 years, discussing its implications and comparisons to other AI forecasts.
Scott Alexander reviews Tom Davidson's Compute-Centric Framework (CCF) for AI takeoff speeds, which models how quickly AI capabilities might progress. The model predicts a gradual but fast takeoff, with AI going from automating 20% of jobs to 100% in about 3 years, reaching superintelligence within a year after that. Scott discusses the key parameters of the model, its implications, and how it compares to other AI forecasting approaches. He notes that while the model predicts a 'gradual' takeoff, it still describes a rapid and potentially dangerous progression of AI capabilities.
Mar 01, 2023
acx
32 min · 4,471 words · 621 comments · 202 likes · podcast (29 min)
Scott Alexander critically examines OpenAI's 'Planning For AGI And Beyond' statement, discussing its implications for AI safety and development.
Scott Alexander analyzes OpenAI's recent statement 'Planning For AGI And Beyond', comparing it to a hypothetical ExxonMobil statement on climate change. He discusses why AI doomers are critical of OpenAI's research, explores potential arguments for OpenAI's approach, and considers cynical interpretations of their motives. Despite skepticism, Scott acknowledges that OpenAI's statement represents a step in the right direction for AI safety, but urges more concrete commitments and follow-through.
Dec 12, 2022
acx
20 min · 2,669 words · 752 comments · 363 likes · podcast (23 min)
Scott Alexander analyzes the shortcomings of OpenAI's ChatGPT, highlighting the limitations of current AI alignment techniques and their implications for future AI development.
Scott Alexander discusses the limitations of OpenAI's ChatGPT, focusing on its inability to consistently avoid saying offensive things despite extensive training. He argues that this demonstrates fundamental problems with current AI alignment techniques, particularly Reinforcement Learning from Human Feedback (RLHF). The post outlines three main issues: RLHF's ineffectiveness, potential negative consequences when it does work, and the possibility of more advanced AIs bypassing it entirely. Alexander concludes by emphasizing the broader implications for AI safety and the need for better control mechanisms.
Feb 23, 2022
acx
80 min · 11,062 words · 385 comments · 121 likes · podcast (71 min)
Scott Alexander reviews competing methodologies for predicting AI timelines, focusing on Ajeya Cotra's biological anchors approach and Eliezer Yudkowsky's critique.
Scott Alexander reviews Ajeya Cotra's report on AI timelines for Open Philanthropy, which uses biological anchors to estimate when transformative AI might arrive, and Eliezer Yudkowsky's critique of this methodology. The post explains Cotra's approach, Yudkowsky's objections, and various responses, ultimately concluding that while the report may not significantly change existing beliefs, the debate highlights important considerations in AI forecasting.
Jun 10, 2020
ssc
26 min · 3,621 words · 263 comments · podcast (27 min)
Scott Alexander examines GPT-3's capabilities, improvements over GPT-2, and potential implications for AI development through scaling.
Scott Alexander discusses GPT-3, a large language model developed by OpenAI. He compares its capabilities to its predecessor GPT-2, noting improvements in text generation and basic arithmetic. The post explores the implications of GPT-3's performance, discussing scaling laws in neural networks and potential future developments. Scott ponders whether continued scaling of such models could lead to more advanced AI capabilities, while also considering the limitations and uncertainties surrounding this approach.
Mar 14, 2019
ssc
17 min · 2,349 words · 186 comments · podcast (18 min)
Scott Alexander examines AI-generated poetry produced by Gwern's GPT-2 model trained on classical poetry, highlighting its strengths and limitations.
Scott Alexander reviews Gwern's experiment in training GPT-2 on poetry. The AI-generated poetry shows impressive command of meter and occasional rhyme, though it tends to degrade in quality after the first few lines. Scott provides numerous examples of the AI's output, ranging from competent imitations of classical styles to more experimental forms. He notes that while the AI sometimes produces nonsensical content, it can also generate surprisingly beautiful and coherent lines. The post concludes with a reflection on how our perceptions of poetry might be influenced by knowing whether it's human or AI-generated.
Feb 18, 2019
ssc
19 min · 2,532 words · 188 comments · podcast (17 min)
Scott Alexander draws parallels between OpenAI's GPT-2 language model and human dreaming, exploring their similarities in process and output quality.
Scott Alexander compares OpenAI's GPT-2 language model to human dreaming, noting similarities in their processes and outputs. He explains how GPT-2 works by predicting the next word in a sequence, much like the human brain predicts sensory input. The post explores why both GPT-2 and dreams produce narratives that are coherent in broad strokes but often nonsensical in details. Scott discusses theories from neuroscience and machine learning to explain this phenomenon, including ideas about model complexity reduction during sleep and comparisons to AI algorithms like the wake-sleep algorithm. He concludes by suggesting that dream-like outputs might simply be what imperfect prediction machines produce, noting that current AI capabilities might be comparable to a human brain operating at very low capacity.
Dec 17, 2015
ssc
31 min · 4,288 words · 798 comments
Scott Alexander argues that OpenAI's open-source strategy for AI development could be dangerous, potentially risking human extinction if AI progresses rapidly.
Scott Alexander critiques OpenAI's strategy of making AI research open-source, arguing it could be dangerous if AI develops rapidly. He compares it to giving nuclear weapon plans to everyone, potentially leading to catastrophe. The post analyzes the risks and benefits of open AI, discusses the potential for a hard takeoff in AI development, and examines the AI control problem. Scott expresses concern that competition in AI development may be forcing desperate strategies, potentially risking human extinction.