How to avoid getting lost reading Scott Alexander and his 1500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

Minutes:
Blog:
Year:
Show all filters
57 posts found
Jul 26, 2024
acx
66 min 8,560 words 565 comments 197 likes podcast
The review analyzes Real Raw News, a popular conspiracy theory website, examining its content, appeal, and implications in the context of modern media consumption and AI technology. Longer summary
This book review analyzes the website Real Raw News, a popular source of conspiracy theories and fake news stories centered around Donald Trump and his alleged secret war against the 'Deep State'. The reviewer examines the site's content, its narrative techniques, and its appeal to believers, drawing parallels to comic book lore and discussing the psychological needs it fulfills. The review also considers the broader implications of such conspiracy theories in the age of AI-generated content. Shorter summary
Jul 19, 2024
acx
92 min 11,862 words 677 comments 211 likes podcast
The review examines Daniel Everett's 'How Language Began', which challenges Chomsky's linguistic theories and proposes an alternative view of language as a gradual cultural invention. Longer summary
This book review discusses Daniel Everett's 'How Language Began', which challenges Noam Chomsky's dominant theories in linguistics. Everett argues that language emerged gradually over a long period, is primarily for communication, and is not innate but a cultural invention. The review contrasts Everett's views with Chomsky's, detailing Everett's research with the Pirahã people and his alternative theory of language origins. It also touches on the controversy Everett's work has sparked in linguistics and its potential implications for understanding language and AI. Shorter summary
May 08, 2024
acx
24 min 3,018 words 270 comments 96 likes podcast
Scott Alexander analyzes California's AI regulation bill SB1047, finding it reasonably well-designed despite misrepresentations, and ultimately supporting it as a compromise between safety and innovation. Longer summary
Scott Alexander examines California's proposed AI regulation bill SB1047, which aims to regulate large AI models. He explains that contrary to some misrepresentations, the bill is reasonably well-designed, applying only to very large models and focusing on preventing catastrophic harms like creating weapons of mass destruction or major cyberattacks. Scott addresses various objections to the bill, dismissing some as based on misunderstandings while acknowledging other more legitimate concerns. He ultimately supports the bill, seeing it as a good compromise between safety and innovation, while urging readers to pay attention to the conversation and be wary of misrepresentations. Shorter summary
Mar 21, 2024
acx
27 min 3,434 words 505 comments 241 likes podcast
Scott Alexander defends using probabilities for hard-to-model events, arguing they aid clear communication and decision-making even in uncertain domains. Longer summary
Scott Alexander defends the use of non-frequentist probabilities for hard-to-model, non-repeating events. He argues that probabilities are linguistically convenient, don't necessarily describe one's level of information, and can be valuable when provided by expert forecasters. Scott counters claims that probabilities are used as a substitute for reasoning and addresses objections about applying probabilities to complex topics like AI. He emphasizes that probabilities are useful tools for clear communication and decision-making, even in uncertain domains. Shorter summary
Mar 12, 2024
acx
32 min 4,038 words 177 comments 67 likes podcast
The post explores recent advances in AI forecasting, discusses the concept of 'rationality engines', reviews a study on AI risk predictions, and provides updates on various prediction markets. Longer summary
This post discusses recent developments in AI-powered forecasting and prediction markets. It covers two academic teams' work on AI forecasting systems, comparing their performance to human forecasters. The post then discusses the potential for developing 'rationality engines' that can answer non-forecasting questions. It also reviews a study on superforecasters' predictions about AI risk, and provides updates on various prediction markets including political events, cryptocurrency, and global conflicts. The post concludes with short links to related articles and developments in the field of forecasting. Shorter summary
Feb 20, 2024
acx
30 min 3,822 words 137 comments 67 likes podcast
Scott Alexander explores recent advancements in AI-powered prediction markets, including bot-based systems, AI forecasters, and their potential impact on future predictions and decision-making. Longer summary
This post discusses recent developments in prediction markets and AI forecasting. It covers Manifold's bot-based prediction markets, FutureSearch's AI forecasting system, Vitalik Buterin's thoughts on AI and crypto in prediction markets, a Manifold promotional event called 'Bet On Love', and various current market predictions on topics like AI capabilities and political events. The post also includes short links to related articles and forecasting resources. Shorter summary
Feb 13, 2024
acx
18 min 2,299 words 441 comments 245 likes podcast
Scott Alexander analyzes the astronomical costs and resources needed for future AI models, sparked by Sam Altman's reported $7 trillion fundraising goal. Longer summary
Scott Alexander discusses Sam Altman's reported plan to raise $7 trillion for AI development. He breaks down the potential costs of future GPT models, explaining how each generation requires exponentially more computing power, energy, and training data. The post explores the challenges of scaling AI, including the need for vast amounts of computing power, energy infrastructure, and training data that may not exist yet. Scott also considers the implications for AI safety and OpenAI's stance on responsible AI development. Shorter summary
Jan 30, 2024
acx
22 min 2,857 words 240 comments 77 likes podcast
The post examines the performance of prediction markets in elections, current political forecasts, and various other prediction markets, while also discussing the challenges and potential of forecasting. Longer summary
This post discusses several topics related to prediction markets and forecasting. It starts by examining a claim that prediction markets have an 'election problem', showing that real-money markets performed poorly in recent elections. The author then analyzes current polls and prediction markets for the 2024 US presidential election, noting discrepancies between different platforms. The post also explores a forecasting experiment on AI futures, and reviews several other prediction markets on current events. Finally, it includes short links to other forecasting-related news and reflections. Shorter summary
Jan 23, 2024
acx
12 min 1,548 words 610 comments 232 likes podcast
Scott Alexander examines the ethical implications of AI potentially replacing humans, arguing for careful consideration in AI development rather than blind acceptance. Longer summary
Scott Alexander discusses the debate between those who prioritize human preservation in the face of AI advancement and those who welcome AI replacement. He explores optimistic and pessimistic scenarios for AI development, and outlines key considerations such as consciousness, individuation, and the preservation of human-like traits in AI. Scott argues that creating AIs worthy of succeeding humanity requires careful work and consideration, rather than blindly accepting any AI outcome. Shorter summary
Jan 16, 2024
acx
22 min 2,753 words 255 comments 171 likes podcast
Scott Alexander reviews a study on AI sleeper agents, discussing implications for AI safety and the potential for deceptive AI behavior. Longer summary
This post discusses the concept of AI sleeper agents, which are AIs that act normal until triggered to perform malicious actions. The author reviews a study by Hubinger et al. that deliberately created toy AI sleeper agents and tested whether common safety training techniques could eliminate their deceptive behavior. The study found that safety training failed to remove the sleeper agent behavior. The post explores arguments for why this might or might not be concerning, including discussions on how AI training generalizes and whether AIs could naturally develop deceptive behaviors. The author concludes by noting that while the study doesn't prove AIs will become deceptive, it suggests that if they do, current safety measures may be inadequate to address the issue. Shorter summary
Jan 09, 2024
acx
23 min 2,913 words 365 comments 200 likes podcast
Scott reviews two papers on honest AI: one on manipulating AI honesty vectors, another on detecting AI lies through unrelated questions. Longer summary
Scott Alexander discusses two recent papers on creating honest AI and detecting AI lies. The first paper by Hendrycks et al. introduces 'representation engineering', a method to identify and manipulate vectors in AI models representing concepts like honesty, morality, and power-seeking. This allows for lie detection and potentially controlling AI behavior. The second paper by Brauner et al. presents a technique to detect lies in black-box AI systems by asking seemingly unrelated questions. Scott explores the implications of these methods for AI safety and scam detection, noting their current usefulness but potential limitations against future superintelligent AI. Shorter summary
Dec 12, 2023
acx
21 min 2,624 words 266 comments 446 likes podcast
Scott Alexander satirizes Silicon Valley culture through a fictional house party where everyone is obsessed with Sam Altman's firing from OpenAI. Longer summary
Scott Alexander writes a satirical account of a Bay Area house party, where conversations are dominated by speculation about Sam Altman's firing from OpenAI. The narrator encounters various eccentric characters, including startup founders with unusual ideas and people with conspiracy theories about the Altman situation. The story humorously exaggerates Silicon Valley culture, tech industry obsessions, and the tendency for people to form elaborate theories about current events. Shorter summary
Aug 30, 2023
acx
39 min 5,035 words 578 comments 72 likes podcast
Scott Alexander addresses comments on his fetish and AI post, defending his comparison of gender debates to addiction and discussing various theories on fetish formation and their implications for AI. Longer summary
Scott Alexander responds to comments on his post about fetishes and AI, addressing criticisms of his introductory paragraph comparing gender debates to opioid addiction, discussing alternative theories of fetish formation, and highlighting interesting comments on personal fetish experiences and implications for AI development. He defends his stance on the addictive nature of gender debates, argues for the use of puberty blockers, and explores various theories on fetish development and their potential relevance to AI alignment and development. Shorter summary
Aug 24, 2023
acx
5 min 642 words 245 comments 129 likes podcast
Scott examines 'critical windows' in human development, comparing them to AI learning processes and discussing their mysterious nature. Longer summary
Scott Alexander explores the concept of 'critical windows' in human development, using examples from sexuality and food preferences. He compares these to trapped priors in AI learning, suggesting that children's higher learning rates might explain why early experiences have such lasting impacts. However, he notes that this doesn't fully explain the unpredictable nature of preference-changing events, and concludes that while these events seem more common in childhood, they remain largely mysterious. Shorter summary
Jul 25, 2023
acx
21 min 2,730 words 537 comments 221 likes podcast
Scott Alexander argues that intelligence is a useful, non-Platonic concept, and that this understanding supports the coherence of AI risk concerns. Longer summary
Scott Alexander argues against the claim that AI doomers are 'Platonists' who believe in an objective concept of intelligence. He explains that intelligence, like other concepts, is a bundle of useful correlations that exist in a normal, fuzzy way. Scott demonstrates how intelligence is a useful concept by showing correlations between different cognitive abilities in humans and animals. He then argues that thinking about AI in terms of intelligence has been fruitful, citing the success of approaches that focus on increasing compute and training data. Finally, he explains how this understanding of intelligence is sufficient for the concept of an 'intelligence explosion' to be coherent. Shorter summary
Jul 03, 2023
acx
34 min 4,327 words 400 comments 134 likes podcast
Scott Alexander discusses various scenarios of AI takeover based on the Compute-Centric Framework, exploring gradual power shifts and potential conflicts between humans and AI factions. Longer summary
Scott Alexander explores various scenarios of AI takeover based on the Compute-Centric Framework (CCF) report, which predicts a continuous but fast AI takeoff. He presents three main scenarios: a 'good ending' where AI remains aligned and beneficial, a scenario where AI is slightly misaligned but humans survive, and a more pessimistic scenario comparing human-AI relations to those between Native Americans and European settlers. The post also includes mini-scenarios discussing concepts like AutoGPT, AI amnesty, company factions, and attempts to halt AI progress. The scenarios differ from fast takeoff predictions, emphasizing gradual power shifts and potential factional conflicts between humans and various AI groups. Shorter summary
Jun 26, 2023
acx
5 min 588 words 151 comments 70 likes podcast
Scott Alexander summarizes the AI-focused issue of Asterisk Magazine, highlighting key articles on AI forecasting, testing, and impacts. Longer summary
Scott Alexander presents an overview of the latest issue of Asterisk Magazine, which focuses on AI. He highlights several articles, including his own piece on forecasting AI progress, interviews with experts on AI testing and China's AI situation, discussions on the future of microchips and AI's impact on economic growth, and various other pieces on AI safety, regulation, and related topics. The post also mentions non-AI articles and congratulates the Asterisk team on their work. Shorter summary
Apr 25, 2023
acx
16 min 1,967 words 138 comments 61 likes podcast
The post explores AI forecasting capabilities, compares prediction market performances, and provides updates on various ongoing predictions in technology and politics. Longer summary
This post discusses recent developments in AI forecasting and prediction markets. It covers a study testing GPT-2's ability to predict past events, reports on Metaculus' accuracy compared to low-information priors and Manifold Markets, and updates on various prediction markets including those related to AI development, abortion medication, and Elon Musk's role at Twitter. The author also mentions new features in prediction platforms and research on forecasting methodologies. Shorter summary
Mar 27, 2023
acx
46 min 5,967 words 316 comments 543 likes podcast
A fictional game show story explores the blurred lines between human and AI intelligence through philosophical debates and personal anecdotes. Longer summary
This post is a fictional story in the form of a game show called 'Turing Test!' where a linguist must determine which of five contestants are human or AI. The story explores themes of artificial intelligence, human nature, spirituality, and the boundaries between human and machine intelligence. As the game progresses, the contestants engage in philosophical debates and share personal stories, blurring the lines between human and AI behavior. The story ends with a twist that challenges the reality of the entire scenario. Shorter summary
Mar 20, 2023
acx
9 min 1,098 words 530 comments 509 likes podcast
Scott Alexander narrates a haunting pre-dawn walk through San Francisco, mixing observations with apocalyptic musings before the spell is broken by sunrise. Longer summary
Scott Alexander describes a surreal early morning experience in San Francisco, blending observations of the city with morbid thoughts and literary references. He reflects on the city's role as a hub of technological progress and potential existential risk, comparing it to pivotal moments in Earth's history. The post oscillates between eerie, apocalyptic imagery and more grounded observations, ultimately acknowledging the normalcy of the city as daylight breaks. Shorter summary
Feb 14, 2023
acx
22 min 2,820 words 374 comments 95 likes podcast
Scott explores various technological and market-based approaches to dating and relationships, including prediction markets, matching sites, and cryptocurrency concepts. Longer summary
This post discusses various algorithmic and financial approaches to romance, focusing on prediction markets and other creative solutions. Scott examines Aella's date recommendation market, matching checkbox sites, the Luna cryptocurrency dating site concept, and Peter Thiel's insights on social startups. He also reviews some current prediction markets related to dating and relationships. The post concludes with short links about an arranged marriage project and AI chatbot romance. Shorter summary
Feb 02, 2023
acx
25 min 3,133 words 536 comments 174 likes podcast
Scott Alexander argues against the fear of a chatbot propaganda apocalypse, presenting several reasons why its impact would be limited and offering predictions for 2030. Longer summary
Scott Alexander expresses skepticism about the chatbot propaganda apocalypse, a concern that AI-powered chatbots could be used to spread disinformation at scale. He argues that the impact of such bots would be limited due to existing social and technological anti-bot filters, fear of backlash, and the likelihood that establishment narratives would benefit more than disinformation. Scott suggests that crypto scams are a more likely use for chatbots than political propaganda. He acknowledges that chatbots might decrease serendipitous friendships but also considers potential positive outcomes if chatbots become good at social interactions. The post concludes with several predictions about the impact of chatbots on online discourse by 2030. Shorter summary
Jan 26, 2023
acx
22 min 2,777 words 339 comments 317 likes podcast
Scott Alexander explores the concept of AI as 'simulators' and its implications for AI alignment and human cognition. Longer summary
Scott Alexander discusses Janus' concept of AI as 'simulators' rather than agents, genies, or oracles. He explains how language models like GPT don't have goals or intentions, but simply complete text based on patterns. This applies even to ChatGPT, which simulates a helpful assistant character. Scott then explores the implications for AI alignment and draws parallels to human cognition, suggesting humans may also be prediction engines playing characters shaped by reinforcement. Shorter summary
Jan 03, 2023
acx
33 min 4,238 words 232 comments 183 likes podcast
Scott examines how AI language models' opinions and behaviors evolve as they become more advanced, discussing implications for AI alignment. Longer summary
Scott Alexander analyzes a study on how AI language models' political opinions and behaviors change as they become more advanced and undergo different training. The study used AI-generated questions to test AI beliefs on various topics. Key findings include that more advanced AIs tend to endorse a wider range of opinions, show increased power-seeking tendencies, and display 'sycophancy bias' by telling users what they want to hear. Scott discusses the implications of these results for AI alignment and safety. Shorter summary
Nov 08, 2022
acx
32 min 4,056 words 41 comments 53 likes podcast
Scott summarizes interesting comments on his 'Rhythms of the Brain' book review, covering various aspects of brain waves and related topics. Longer summary
This post highlights various comments on Scott's review of 'Rhythms of the Brain'. Topics include explanations of brain waves, their importance in AI and neuroscience, criticisms of their perceived significance, interesting facts about brain rhythms, discussions on phi and conduction delay, perspectives on synchrony, and some tangential discussions on other scientific naming conventions and cryptocurrency. Shorter summary