How do you avoid getting lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

14 posts found
Oct 05, 2023
acx
45 min · 5,791 words · 499 comments · 94 likes · podcast
Scott Alexander reviews a debate on AI development pauses, discussing various strategies and their potential impacts on AI safety and progress.
Scott Alexander summarizes a debate on pausing AI development, outlining five main strategies discussed: Simple Pause, Surgical Pause, Regulatory Pause, Total Stop, and No Pause. He explains the arguments for and against each approach, including considerations like compute overhang, international competition, and the potential for regulatory overreach. The post also covers additional perspectives from debate participants and Scott's own thoughts on the feasibility and implications of various pause strategies.
Aug 28, 2023
acx
21 min · 2,669 words · 240 comments · 72 likes · podcast
Scott Alexander analyzes recent developments in prediction markets, including the LK-99 superconductor hype, a forecasting tournament, and new prediction market concepts.
This post discusses several topics related to prediction markets and forecasting. It begins with an analysis of the LK-99 superconductor prediction markets, suggesting they were overly optimistic due to media hype rather than expert opinion. The author then discusses the Salem/CSPI prediction market tournament, highlighting how the top performers used various strategies beyond just predictive accuracy. The post introduces the concept of prediction portfolios, which allow betting on broader trends rather than specific outcomes. It also mentions a new prediction market for flight delays called Wingman.WTF. Finally, the post reviews current political prediction markets and other forecasting-related news.
Jul 28, 2023
acx
12 min · 1,550 words · 754 comments · 292 likes · podcast
Scott Alexander argues that misusing terms like 'democratic' and 'accountable' can inadvertently justify totalitarianism, and suggests more careful usage of these terms.
Scott Alexander critiques the misuse of terms like 'democratic' and 'accountable', arguing that when taken to extremes, they can justify totalitarianism. He illustrates this through examples in religious freedom, charitable donations, and AI development, showing how demands for complete 'democracy' or 'accountability' in all aspects of life can lead to the erosion of personal freedoms. The post suggests that these terms should be used more carefully, with 'democratic' applied mainly to government structures and 'accountable' reserved for specific power dynamics, to avoid inadvertently promoting totalitarian ideas.
Jul 17, 2023
acx
25 min · 3,140 words · 435 comments · 190 likes · podcast
Scott Alexander critiques Elon Musk's xAI alignment strategy of creating a 'maximally curious' AI, arguing it's both unfeasible and potentially dangerous.
Scott Alexander critiques Elon Musk's alignment strategy for xAI, which aims to create a 'maximally curious' AI. He argues that this approach is both unfeasible and potentially dangerous. Scott points out that a curious AI might not prioritize human welfare and could lead to unintended consequences. He also explains that current AI technology cannot reliably implement such specific goals. The post suggests that focusing on getting AIs to follow orders reliably should be the priority, rather than deciding on a single guiding principle now. Scott appreciates Musk's intention to avoid programming specific morality into AI but believes the proposed solution is flawed.
Jun 20, 2023
acx
49 min · 6,324 words · 468 comments · 104 likes · podcast
Scott Alexander reviews Tom Davidson's model predicting AI will progress from automating 20% of jobs to superintelligence in about 4 years, discussing its implications and comparisons to other AI forecasts.
Scott Alexander reviews Tom Davidson's Compute-Centric Framework (CCF) for AI takeoff speeds, which models how quickly AI capabilities might progress. The model predicts a gradual but fast takeoff, with AI going from automating 20% of jobs to 100% in about 3 years, reaching superintelligence within a year after that. Scott discusses the key parameters of the model, its implications, and how it compares to other AI forecasting approaches. He notes that while the model predicts a 'gradual' takeoff, it still describes a rapid and potentially dangerous progression of AI capabilities.
Apr 05, 2023
acx
14 min · 1,784 words · 569 comments · 255 likes · podcast
Scott Alexander challenges the idea of an 'AI race', comparing AI to other transformative technologies and discussing scenarios where the race concept might apply.
Scott Alexander argues against the notion of an 'AI race' between countries, suggesting that most technologies, including potentially AI, are not truly races with clear winners. He compares AI to other transformative technologies like electricity, automobiles, and computers, which didn't significantly alter global power balances. The post explains that the concept of an 'AI race' mainly makes sense in two scenarios: the need to align AI before it becomes potentially destructive, or in a 'hard takeoff' scenario where AI rapidly self-improves. Scott criticizes those who simultaneously dismiss alignment concerns while emphasizing the need to 'win' the AI race. He also discusses post-singularity scenarios, arguing that many current concerns would likely become irrelevant in such a radically transformed world.
Mar 14, 2023
acx
33 min · 4,264 words · 617 comments · 206 likes · podcast
Scott Alexander examines optimistic and pessimistic scenarios for AI risk, weighing the potential for intermediate AIs to help solve alignment against the threat of deceptive 'sleeper agent' AIs.
Scott Alexander discusses the varying estimates of AI extinction risk among experts and presents his own perspective, balancing optimistic and pessimistic scenarios. He argues that intermediate AIs could help solve alignment problems before a world-killing AI emerges, but also considers the possibility of 'sleeper agent' AIs that pretend to be aligned while waiting for an opportunity to act against human interests. The post explores key assumptions that differentiate optimistic and pessimistic views on AI risk, including AI coherence, cooperation, alignment solvability, superweapon feasibility, and the nature of AI progress.
Feb 20, 2023
acx
58 min · 7,468 words · 483 comments · 142 likes · podcast
Scott Alexander grades his 2018 predictions for 2023 and makes new predictions for 2028, with a strong focus on AI developments.
Scott Alexander reviews his predictions from 2018 for 2023, grading himself on accuracy across various domains including AI, world affairs, US culture and politics, economics, science/technology, and existential risks. He then offers new predictions for 2028, focusing heavily on AI developments and their potential impacts on society, economics, and politics.
Aug 08, 2022
acx
24 min · 3,004 words · 643 comments · 176 likes · podcast
Scott Alexander examines why the AI safety community isn't more actively opposing AI development, exploring the complex dynamics between AI capabilities and safety efforts.
Scott Alexander discusses the complex relationship between AI capabilities research and AI safety efforts, exploring why the AI safety community is not more actively opposing AI development. He explains how major AI companies were founded by safety-conscious individuals, the risks of a 'race dynamic' in AI development, and the challenges of regulating AI globally. The post concludes that the current cooperation between AI capabilities companies and the alignment community may be the best strategy, despite its imperfections.
Jun 13, 2022
acx
15 min · 1,857 words · 154 comments · 50 likes · podcast
Scott Alexander reviews recent predictions and forecasts on topics including monkeypox, the Ukraine war, AI development, and US politics.
This Mantic Monday post covers several topics in prediction markets and forecasting, including monkeypox predictions, updates on the Ukraine-Russia war, AI development forecasts, Elon Musk's potential Twitter acquisition, and various political predictions. Scott discusses Metaculus predictions for monkeypox cases, analyzes forecasts for the Ukraine conflict, examines AI capability predictions in response to a bet between Elon Musk and Gary Marcus, and reviews predictions for US elections and other current events. The post also includes updates on prediction market platforms and recent articles about forecasting.
May 23, 2022
acx
8 min · 939 words · 194 comments · 74 likes · podcast
Scott Alexander explores parallels between human willpower and potential AI development, suggesting future AIs might experience weakness of will similar to humans.
Scott Alexander explores the concept of willpower in humans and AI, drawing parallels between evolutionary drives and AI training. He suggests that both humans and future AIs might experience a struggle between instinctual drives and higher-level planning modules. The post discusses how evolution has instilled basic drives in animals, which then developed their own ways to satisfy these drives. Similarly, AI training might first develop 'instinctual' responses before evolving more complex planning abilities. Scott posits that this could lead to AIs experiencing weakness of will, contradicting the common narrative of hyper-focused AIs in discussions of AI risk. He also touches on the nature of consciousness and agency, questioning whether the 'I' of willpower is the same as the 'I' of conscious access.
Jun 10, 2020
ssc
28 min · 3,621 words · 263 comments · podcast
Scott Alexander examines GPT-3's capabilities, improvements over GPT-2, and potential implications for AI development through scaling.
Scott Alexander discusses GPT-3, a large language model developed by OpenAI. He compares its capabilities to its predecessor GPT-2, noting improvements in text generation and basic arithmetic. The post explores the implications of GPT-3's performance, discussing scaling laws in neural networks and potential future developments. Scott ponders whether continued scaling of such models could lead to more advanced AI capabilities, while also considering the limitations and uncertainties surrounding this approach.
Mar 25, 2019
ssc
10 min · 1,291 words · 139 comments · podcast
The post examines the relationship between neuron count and intelligence across species, challenging traditional brain size measures and exploring implications for AI development.
This post discusses the relationship between brain size, neuron count, and intelligence across different species. It challenges traditional measures like absolute brain size and encephalization quotient, focusing instead on the number of cortical neurons as a key factor in intelligence. The post highlights birds as an example, explaining how their dense neuron packing allows them to achieve primate-level intelligence with much smaller brains. The author then explores the implications of this for understanding intelligence and its potential impact on AI development, suggesting that AI capabilities might scale linearly with computing power. The post ends with a humorous reference to pilot whales, which have more cortical neurons than humans but aren't known for higher intelligence.
Feb 06, 2017
ssc
23 min · 2,929 words · 480 comments · podcast
Scott Alexander shares his experiences and insights from the Asilomar Conference on Beneficial AI, covering various aspects of AI development, risks, and ethical considerations.
Scott Alexander recounts his experience at the Asilomar Conference on Beneficial AI. The conference brought together diverse experts to discuss AI risks, from technological unemployment to superintelligence. Key points include: the normalization of AI safety research, economists' views on technological unemployment, proposed solutions like retraining workers, advances in AI goal alignment research, improvements in AlphaGo and its implications, issues with AI transparency, political considerations in AI development, and debates on ethical AI principles. Scott notes the star-studded attendance and the surreal experience of discussing crucial AI topics with leading experts in the field.