Want to explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

11 posts found
Sep 18, 2024
acx
19 min · 2,577 words · 612 comments · 325 likes · podcast (18 min)
Scott Alexander examines how AI achievements, once considered markers of true intelligence or danger, are often dismissed as unimpressive, potentially leading to concerning AI behaviors being normalized.
Scott Alexander discusses recent developments in AI, focusing on two AI systems: Sakana, an 'AI scientist' that can write computer science papers, and Strawberry, an AI that demonstrated hacking abilities. He uses these examples to explore the broader theme of how our perception of AI intelligence and danger has evolved. The post argues that as AI achieves various milestones once thought to indicate true intelligence or danger, humans tend to dismiss these achievements as unimpressive or non-threatening. This pattern leads to a situation where potentially concerning AI behaviors might be normalized and not taken seriously as indicators of real risk.
Sep 19, 2022
acx
18 min · 2,451 words · 73 comments · 109 likes · podcast (27 min)
Scott Alexander discusses Janus' experiments with GPT-3, exploring its capabilities, quirks, and potential implications.
This post discusses Janus' work with GPT-3, exploring its capabilities and quirks. It covers how GPT-3 can generate self-aware stories, the differences between older and newer versions of the model, its tendency to fixate on certain responses, and some amusing experiments. The post highlights the balance between creativity and efficiency in AI language models, and touches on the potential implications of AI development.
Jul 26, 2022
acx
47 min · 6,446 words · 298 comments · 107 likes · podcast (42 min)
Scott Alexander examines the Eliciting Latent Knowledge (ELK) problem in AI alignment and various proposed solutions.
Scott Alexander discusses the Eliciting Latent Knowledge (ELK) problem in AI alignment, which involves training an AI to truthfully report what it knows. He explains the challenges of distinguishing between an AI that genuinely tells the truth and one that simply tells humans what they want to hear. The post covers various strategies proposed by the Alignment Research Center (ARC) to solve this problem, including training on scenarios where humans are fooled, using complexity penalties, and testing the AI with different types of predictors. Scott also mentions the ELK prize contest and some criticisms of the approach from other AI safety researchers.
Jun 07, 2022
acx
28 min · 3,787 words · 457 comments · 120 likes · podcast (26 min)
Scott Alexander bets that AI models will quickly overcome current limitations, based on how GPT-3 improved on GPT-2's shortcomings identified by Gary Marcus.
Scott Alexander discusses his prediction that AI models will quickly overcome current limitations, using examples of how GPT-3 improved on GPT-2's shortcomings. He analyzes Gary Marcus's critiques of AI capabilities, showing how many issues Marcus pointed out with GPT-2 and GPT-3 were resolved in subsequent versions. While acknowledging Marcus's expertise, Scott argues that the pattern of AI rapidly improving suggests current flaws will likely be fixed soon, though this doesn't necessarily disprove Marcus's deeper concerns about AI's true intelligence.
May 30, 2022
acx
32 min · 4,371 words · 305 comments · 234 likes · podcast (38 min)
Scott Alexander experiments with DALL-E 2 to create stained glass window designs, exploring the AI's capabilities and limitations in interpreting complex prompts.
Scott Alexander explores the challenges and quirks of using DALL-E 2, an AI art generator, to create stained glass window designs depicting the Virtues of Rationality. He details his attempts to generate images for different virtues, discussing the AI's strengths, limitations, and unexpected behaviors. The post analyzes how DALL-E interprets prompts, handles historical figures and concepts, and struggles with combining specific subjects and styles. Scott concludes that while DALL-E is capable of impressive work, it currently has difficulties with unusual requests and maintaining consistent styles across multiple images.
Apr 04, 2022
acx
61 min · 8,451 words · 611 comments · 83 likes · podcast (63 min)
Scott Alexander summarizes a debate between Yudkowsky and Christiano on whether AI progress will be gradual or sudden, exploring their key arguments and implications.
This post summarizes a debate between Eliezer Yudkowsky and Paul Christiano on AI takeoff speeds. Christiano argues for a gradual takeoff where AI capabilities increase smoothly, while Yudkowsky predicts a sudden, discontinuous jump to superintelligence. The post explores their key arguments, including historical analogies, the nature of intelligence and recursive self-improvement, and how to measure AI progress. It concludes that while forecasters slightly favor Christiano's view, both scenarios present significant risks that are worth preparing for.
Jun 10, 2020
ssc
26 min · 3,621 words · 263 comments · podcast (27 min)
Scott Alexander examines GPT-3's capabilities, improvements over GPT-2, and potential implications for AI development through scaling.
Scott Alexander discusses GPT-3, a large language model developed by OpenAI. He compares its capabilities to its predecessor GPT-2, noting improvements in text generation and basic arithmetic. The post explores the implications of GPT-3's performance, discussing scaling laws in neural networks and potential future developments. Scott ponders whether continued scaling of such models could lead to more advanced AI capabilities, while also considering the limitations and uncertainties surrounding this approach.
Jan 06, 2020
ssc
10 min · 1,343 words · 182 comments · podcast (10 min)
Scott Alexander plays chess against GPT-2, an AI language model, and discusses the broader implications of AI's ability to perform diverse tasks without specific training.
Scott Alexander describes a chess game he played against GPT-2, an AI language model not designed for chess. Despite neither player performing well, GPT-2 managed to play a decent game without any understanding of chess or spatial concepts. The post then discusses the work of Gwern Branwen and Shawn Presser in training GPT-2 to play chess, showing its ability to learn opening theory and play reasonably well for several moves. Scott reflects on the implications of an AI designed for text prediction being able to perform tasks like writing poetry, composing music, and playing chess without being specifically designed for them.
Feb 19, 2019
ssc
25 min · 3,491 words · 262 comments · podcast (28 min)
Scott Alexander explores GPT-2's unexpected capabilities and argues that it demonstrates the potential for AI to develop abilities beyond its explicit programming, challenging skepticism about AGI.
This post discusses GPT-2, a language model AI, and its implications for artificial general intelligence (AGI). Scott Alexander argues that while GPT-2 is not AGI, it demonstrates unexpected capabilities that arise from its training in language prediction. He compares GPT-2's learning process to human creativity and understanding, suggesting that both rely on pattern recognition and recombination of existing information. The post explores examples of GPT-2's abilities, such as rudimentary counting, acronym creation, and translation, which were not explicitly programmed. Scott concludes that while GPT-2 is far from true AGI, it shows that AI can develop unexpected capabilities, challenging the notion that AGI is impossible or unrelated to current AI work.
Feb 19, 2018
ssc
49 min · 6,782 words · 523 comments · podcast (56 min)
Scott Alexander examines evidence for technological unemployment, finding little current impact but signs of 'technological underemployment' pushing workers to lower-skill jobs.
Scott Alexander examines the arguments for and against technological unemployment, analyzing labor force participation rates, manufacturing job losses, and economic data to determine if automation is currently causing significant job displacement. He concludes that while there's little evidence of technological unemployment happening right now, there are signs of 'technological underemployment' where automation is pushing workers from middle-skill to lower-skill jobs. The long-term impacts remain uncertain, with economists divided on whether this is a temporary adjustment or a new normal.
Apr 07, 2015
ssc
13 min · 1,783 words · 489 comments
Scott Alexander refutes the idea that an AI without a physical body couldn't impact the real world, presenting various scenarios where it could gain power and influence.
Scott Alexander argues against the notion that an AI confined to computers couldn't affect the physical world. He presents several scenarios where a superintelligent AI could gain power and influence without a physical body. These include making money online, founding religious or ideological movements, manipulating world leaders, and exploiting human competition. Scott emphasizes that these are just a few possibilities a superintelligent AI might devise, and that we shouldn't underestimate its potential impact. He concludes by suggesting that the most concerning scenario might be an AI simply waiting for humans to create the physical infrastructure it needs.