How can you explore Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

Nov 01, 2024
acx
46 min 6,321 words Comments pending podcast (41 min)
A diverse collection of news items, studies, and interesting facts from November 2024, covering topics from scientific discoveries to cultural phenomena.
This post is a compilation of various news items, studies, and interesting facts from November 2024. It covers a wide range of topics including scientific discoveries, political events, technological advancements, cultural phenomena, and historical anecdotes. The post is structured as a numbered list, with each item briefly summarizing a piece of news or information. Some notable items include a new schizophrenia drug approval, YouGov polling results on various historical figures, findings on genetic IQ changes over time, and updates on AI technology and its implications.
Oct 24, 2024
acx
40 min 5,495 words Comments pending podcast (35 min)
Scott Alexander reports on the Progress Studies Conference, discussing advances in energy, AI, housing policy, and other fields, with an overall optimistic tone about recent developments and potential for future progress.
Scott Alexander recounts his experience at the Progress Studies Conference, which focused on various technological and social advancements. The conference covered topics such as energy (particularly solar and nuclear power), politics and regulation, AI, the YIMBY movement, self-driving cars, and other areas of potential progress. The overall mood was optimistic about recent developments in these fields, with many speakers highlighting how regulatory changes could accelerate progress. Scott notes that while the conference itself may not be directly responsible for recent advances, it reflects a growing awareness of the importance of technological progress and the need to address regulatory barriers.
Sep 18, 2024
acx
19 min 2,577 words 612 comments 325 likes podcast (18 min)
Scott Alexander examines how AI achievements, once considered markers of true intelligence or danger, are often dismissed as unimpressive, potentially leading to concerning AI behaviors being normalized.
Scott Alexander discusses recent developments in AI, focusing on two AI systems: Sakana, an 'AI scientist' that can write computer science papers, and Strawberry, an AI that demonstrated hacking abilities. He uses these examples to explore the broader theme of how our perception of AI intelligence and danger has evolved. The post argues that as AI achieves various milestones once thought to indicate true intelligence or danger, humans tend to dismiss these achievements as unimpressive or non-threatening. This pattern leads to a situation where potentially concerning AI behaviors might be normalized and not taken seriously as indicators of real risk.
Sep 12, 2024
acx
35 min 4,864 words 1,216 comments 211 likes podcast (29 min)
A compilation of 50 diverse news items and links from September 2024, covering topics from politics and economics to science and technology.
This post is a compilation of 50 diverse links and news items from September 2024. It covers a wide range of topics including politics, science, technology, economics, and social issues. Some notable items include discussions on kidney donation compensation, personality genetics, fertility rates in Israel, drug decriminalization in Oregon, AI regulation in California, and various political and economic analyses. The post also includes commentary on recent studies, books, and social phenomena.
Jul 26, 2024
acx
62 min 8,560 words 565 comments 197 likes podcast (47 min)
The review analyzes Real Raw News, a popular conspiracy theory website, examining its content, appeal, and implications in the context of modern media consumption and AI technology.
This book review analyzes the website Real Raw News, a popular source of conspiracy theories and fake news stories centered around Donald Trump and his alleged secret war against the 'Deep State'. The reviewer examines the site's content, its narrative techniques, and its appeal to believers, drawing parallels to comic book lore and discussing the psychological needs it fulfills. The review also considers the broader implications of such conspiracy theories in the age of AI-generated content.
Jul 24, 2024
acx
23 min 3,151 words 1,000 comments 180 likes podcast (20 min)
Scott Alexander presents a diverse collection of 41 links and news items, covering topics from science and politics to history and technology, with brief commentaries and insights.
This post is a collection of 41 diverse links and news items covering a wide range of topics. The items include scientific studies, political developments, historical anecdotes, technological advancements, and cultural phenomena. Scott Alexander presents these with brief commentaries, often adding his own insights or expressing skepticism about certain claims. The tone is informative and occasionally humorous, with Scott pointing out interesting connections or implications of the information presented.
Jul 19, 2024
acx
85 min 11,862 words 677 comments 211 likes podcast (79 min)
The review examines Daniel Everett's 'How Language Began', which challenges Chomsky's linguistic theories and proposes an alternative view of language as a gradual cultural invention.
This book review discusses Daniel Everett's 'How Language Began', which challenges Noam Chomsky's dominant theories in linguistics. Everett argues that language emerged gradually over a long period, is primarily for communication, and is not innate but a cultural invention. The review contrasts Everett's views with Chomsky's, detailing Everett's research with the Pirahã people and his alternative theory of language origins. It also touches on the controversy Everett's work has sparked in linguistics and its potential implications for understanding language and AI.
May 29, 2024
acx
42 min 5,785 words 1,045 comments 125 likes podcast (37 min)
A wide-ranging collection of 40 news items and interesting facts, covering AI, politics, science, economics, and culture, with the author's commentary.
This post is a collection of 40 diverse links and news items covering topics such as AI developments, politics, science, technology, economics, and culture. It includes updates on OpenAI and Google's AI projects, discussions on religious phenomena, analyses of social and economic trends, and various interesting facts and anecdotes. The author provides commentary and context for many of the items, often with a mix of humor and critical analysis.
May 08, 2024
acx
22 min 3,018 words 270 comments 96 likes podcast (17 min)
Scott Alexander analyzes California's AI regulation bill SB1047, finding it reasonably well-designed despite misrepresentations, and ultimately supporting it as a compromise between safety and innovation.
Scott Alexander examines California's proposed AI regulation bill SB1047, which aims to regulate large AI models. He explains that contrary to some misrepresentations, the bill is reasonably well-designed, applying only to very large models and focusing on preventing catastrophic harms like creating weapons of mass destruction or major cyberattacks. Scott addresses various objections to the bill, dismissing some as based on misunderstandings while acknowledging other more legitimate concerns. He ultimately supports the bill, seeing it as a good compromise between safety and innovation, while urging readers to pay attention to the conversation and be wary of misrepresentations.
Apr 04, 2024
acx
22 min 3,057 words 806 comments 127 likes podcast (18 min)
Scott Alexander presents a curated list of 31 diverse links and news items, covering topics from politics and science to cultural phenomena and AI developments.
This post is a collection of 31 diverse links and news items curated by Scott Alexander. The topics range from unusual political titles and daylight savings time to AI-generated music albums and controversial scientific claims. Scott covers various subjects including community building, political disputes, cultural phenomena, and scientific developments. The post maintains a mix of serious commentary and humorous observations, touching on topics like the ACLU's labor dispute, a full-scale Tower of Babel replica, and claims about increasing IQ. It also includes updates from ACX grantees and discussions on urban planning and noise pollution.
Mar 21, 2024
acx
25 min 3,434 words 505 comments 241 likes podcast (19 min)
Scott Alexander defends using probabilities for hard-to-model events, arguing they aid clear communication and decision-making even in uncertain domains.
Scott Alexander defends the use of non-frequentist probabilities for hard-to-model, non-repeating events. He argues that probabilities are linguistically convenient, don't necessarily describe one's level of information, and can be valuable when provided by expert forecasters. Scott counters claims that probabilities are used as a substitute for reasoning and addresses objections about applying probabilities to complex topics like AI. He emphasizes that probabilities are useful tools for clear communication and decision-making, even in uncertain domains.
Mar 12, 2024
acx
29 min 4,038 words 177 comments 67 likes podcast (29 min)
The post explores recent advances in AI forecasting, discusses the concept of 'rationality engines', reviews a study on AI risk predictions, and provides updates on various prediction markets.
This post discusses recent developments in AI-powered forecasting and prediction markets. It covers two academic teams' work on AI forecasting systems, comparing their performance to human forecasters. The post then discusses the potential for developing 'rationality engines' that can answer non-forecasting questions. It also reviews a study on superforecasters' predictions about AI risk, and provides updates on various prediction markets including political events, cryptocurrency, and global conflicts. The post concludes with short links to related articles and developments in the field of forecasting.
Feb 29, 2024
acx
36 min 4,992 words 585 comments 115 likes podcast (32 min)
A compilation of 55 diverse links and brief commentaries on various topics, including scientific studies, historical anecdotes, cultural phenomena, and technological developments.
This post is a collection of 55 diverse links and brief commentaries on various topics, ranging from scientific studies and historical anecdotes to cultural phenomena and technological developments. The author covers subjects such as economic miracles, AI developments, religious trends, and social issues. Many items include claims or studies that the author finds interesting or surprising, often with a skeptical or analytical perspective. The post also includes updates on previous topics discussed by the author and references to other bloggers or researchers in the rationalist and effective altruism communities.
Feb 20, 2024
acx
28 min 3,822 words 137 comments 67 likes podcast (26 min)
Scott Alexander explores recent advancements in AI-powered prediction markets, including bot-based systems, AI forecasters, and their potential impact on future predictions and decision-making.
This post discusses recent developments in prediction markets and AI forecasting. It covers Manifold's bot-based prediction markets, FutureSearch's AI forecasting system, Vitalik Buterin's thoughts on AI and crypto in prediction markets, a Manifold promotional event called 'Bet On Love', and various current market predictions on topics like AI capabilities and political events. The post also includes short links to related articles and forecasting resources.
Feb 13, 2024
acx
17 min 2,299 words 441 comments 245 likes podcast (13 min)
Scott Alexander analyzes the astronomical costs and resources needed for future AI models, sparked by Sam Altman's reported $7 trillion fundraising goal.
Scott Alexander discusses Sam Altman's reported plan to raise $7 trillion for AI development. He breaks down the potential costs of future GPT models, explaining how each generation requires exponentially more computing power, energy, and training data. The post explores the challenges of scaling AI, including the need for vast amounts of computing power, energy infrastructure, and training data that may not exist yet. Scott also considers the implications for AI safety and OpenAI's stance on responsible AI development.
Jan 30, 2024
acx
21 min 2,857 words 240 comments 77 likes podcast (19 min)
The post examines the performance of prediction markets in elections, current political forecasts, and various other prediction markets, while also discussing the challenges and potential of forecasting.
This post discusses several topics related to prediction markets and forecasting. It starts by examining a claim that prediction markets have an 'election problem', showing that real-money markets performed poorly in recent elections. The author then analyzes current polls and prediction markets for the 2024 US presidential election, noting discrepancies between different platforms. The post also explores a forecasting experiment on AI futures, and reviews several other prediction markets on current events. Finally, it includes short links to other forecasting-related news and reflections.
Jan 23, 2024
acx
12 min 1,548 words 610 comments 232 likes podcast (10 min)
Scott Alexander examines the ethical implications of AI potentially replacing humans, arguing for careful consideration in AI development rather than blind acceptance.
Scott Alexander discusses the debate between those who prioritize human preservation in the face of AI advancement and those who welcome AI replacement. He explores optimistic and pessimistic scenarios for AI development, and outlines key considerations such as consciousness, individuation, and the preservation of human-like traits in AI. Scott argues that creating AIs worthy of succeeding humanity requires careful work and consideration, rather than blindly accepting any AI outcome.
Jan 18, 2024
acx
25 min 3,498 words 532 comments 195 likes podcast (20 min)
Scott Alexander's monthly links post covers diverse topics from AI developments and genetic research to historical anecdotes and local news, with a mix of serious analysis and humor.
This links post covers a wide range of topics, including recent research on the Flynn Effect, factors influencing fertility rates, genetic engineering, AI developments, political issues, historical anecdotes, and local Bay Area news. Scott highlights interesting studies, cultural phenomena, and recent events, often with a humorous or ironic tone. He touches on subjects like universal basic income experiments, the formation of elite groups, and changes in political dynamics. The post also includes several visual elements like unusual architectural designs and tattoos.
Jan 16, 2024
acx
20 min 2,753 words 255 comments 171 likes podcast (22 min)
Scott Alexander reviews a study on AI sleeper agents, discussing implications for AI safety and the potential for deceptive AI behavior.
This post discusses the concept of AI sleeper agents, which are AIs that act normal until triggered to perform malicious actions. The author reviews a study by Hubinger et al. that deliberately created toy AI sleeper agents and tested whether common safety training techniques could eliminate their deceptive behavior. The study found that safety training failed to remove the sleeper agent behavior. The post explores arguments for why this might or might not be concerning, including discussions on how AI training generalizes and whether AIs could naturally develop deceptive behaviors. The author concludes by noting that while the study doesn't prove AIs will become deceptive, it suggests that if they do, current safety measures may be inadequate to address the issue.
Jan 09, 2024
acx
21 min 2,913 words 365 comments 200 likes podcast (20 min)
Scott reviews two papers on honest AI: one on manipulating AI honesty vectors, another on detecting AI lies through unrelated questions.
Scott Alexander discusses two recent papers on creating honest AI and detecting AI lies. The first paper by Hendrycks et al. introduces 'representation engineering', a method to identify and manipulate vectors in AI models representing concepts like honesty, morality, and power-seeking. This allows for lie detection and potentially controlling AI behavior. The second paper by Brauner et al. presents a technique to detect lies in black-box AI systems by asking seemingly unrelated questions. Scott explores the implications of these methods for AI safety and scam detection, noting their current usefulness but potential limitations against future superintelligent AI.
Dec 12, 2023
acx
19 min 2,624 words 266 comments 446 likes podcast (15 min)
Scott Alexander satirizes Silicon Valley culture through a fictional house party where everyone is obsessed with Sam Altman's firing from OpenAI.
Scott Alexander writes a satirical account of a Bay Area house party, where conversations are dominated by speculation about Sam Altman's firing from OpenAI. The narrator encounters various eccentric characters, including startup founders with unusual ideas and people with conspiracy theories about the Altman situation. The story humorously exaggerates Silicon Valley culture, tech industry obsessions, and the tendency for people to form elaborate theories about current events.
Dec 01, 2023
acx
29 min 4,047 words 532 comments 125 likes podcast (26 min)
A collection of 47 diverse links and news items covering topics from religious practices to AI developments, with brief commentary on many items.
This post is a collection of 47 diverse links and news items covering various topics. It includes discussions on religious practices, video games, political trends, scientific studies, AI developments, economic analyses, historical anecdotes, and cultural phenomena. The author provides brief commentary on many of the items, often with a mix of humor and critical analysis. The links range from academic studies and economic reports to quirky historical facts and contemporary cultural observations.
Sep 28, 2023
acx
31 min 4,232 words 1,218 comments 110 likes podcast (26 min)
A diverse collection of links and brief comments on recent developments in science, technology, politics, and society, ranging from climate change to AI developments to social experiments.
This post is a collection of links and brief comments on various topics. It covers a wide range of subjects including climate change, AI developments, social experiments, scientific studies, political issues, and technological innovations. The author presents these topics with a mix of factual reporting, personal commentary, and sometimes humorous observations. The post touches on subjects like geoengineering, crypto for sex workers, AI art, fertility rates, dating apps, charity effectiveness, and many others. The author often provides links to original sources and sometimes offers his own analysis or opinion on the matters discussed.
Aug 30, 2023
acx
36 min 5,035 words 578 comments 72 likes podcast (31 min)
Scott Alexander addresses comments on his fetish and AI post, defending his comparison of gender debates to addiction and discussing various theories on fetish formation and their implications for AI.
Scott Alexander responds to comments on his post about fetishes and AI, addressing criticisms of his introductory paragraph comparing gender debates to opioid addiction, discussing alternative theories of fetish formation, and highlighting interesting comments on personal fetish experiences and implications for AI development. He defends his stance on the addictive nature of gender debates, argues for the use of puberty blockers, and explores various theories on fetish development and their potential relevance to AI alignment and development.
Aug 24, 2023
acx
5 min 642 words 245 comments 129 likes podcast (4 min)
Scott examines 'critical windows' in human development, comparing them to AI learning processes and discussing their mysterious nature.
Scott Alexander explores the concept of 'critical windows' in human development, using examples from sexuality and food preferences. He compares these to trapped priors in AI learning, suggesting that children's higher learning rates might explain why early experiences have such lasting impacts. However, he notes that this doesn't fully explain the unpredictable nature of preference-changing events, and concludes that while these events seem more common in childhood, they remain largely mysterious.