Want to explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

11 posts found
Jul 19, 2024
acx
85 min · 11,862 words · 677 comments · 211 likes · podcast (79 min)
The review examines Daniel Everett's 'How Language Began', which challenges Chomsky's linguistic theories and proposes an alternative view of language as a gradual cultural invention.
This book review discusses Daniel Everett's 'How Language Began', which challenges Noam Chomsky's dominant theories in linguistics. Everett argues that language emerged gradually over a long period, is primarily for communication, and is not innate but a cultural invention. The review contrasts Everett's views with Chomsky's, detailing Everett's research with the Pirahã people and his alternative theory of language origins. It also touches on the controversy Everett's work has sparked in linguistics and its potential implications for understanding language and AI.
Jun 03, 2023
acx
20 min · 2,750 words · 407 comments · 170 likes · podcast (17 min)
A review of 'Why Machines Will Never Rule the World', presenting its arguments against AGI based on complexity and computability, while critically examining its conclusions and relevance.
This review examines 'Why Machines Will Never Rule the World' by Jobst Landgrebe and Barry Smith, a book arguing against the possibility of artificial general intelligence (AGI). The reviewer presents the book's main arguments, which center on the complexity of human intelligence and the limitations of computational systems. While acknowledging the book's thorough research and engagement with various fields, the reviewer remains unconvinced by its strong conclusions. The review discusses counterarguments, including the current capabilities of language models and the uncertainty surrounding future AI developments. It concludes by suggesting alternative interpretations of the book's arguments and questioning the practical implications of such theoretical debates.
Mar 27, 2023
acx
43 min · 5,967 words · 316 comments · 543 likes · podcast (39 min)
A fictional game show story explores the blurred lines between human and AI intelligence through philosophical debates and personal anecdotes.
This post is a fictional story in the form of a game show called 'Turing Test!' where a linguist must determine which of five contestants are human or AI. The story explores themes of artificial intelligence, human nature, spirituality, and the boundaries between human and machine intelligence. As the game progresses, the contestants engage in philosophical debates and share personal stories, blurring the lines between human and AI behavior. The story ends with a twist that challenges the reality of the entire scenario.
Jan 26, 2023
acx
20 min · 2,777 words · 339 comments · 317 likes · podcast (24 min)
Scott Alexander explores the concept of AI as 'simulators' and its implications for AI alignment and human cognition.
Scott Alexander discusses Janus' concept of AI as 'simulators' rather than agents, genies, or oracles. He explains how language models like GPT don't have goals or intentions, but simply complete text based on patterns. This applies even to ChatGPT, which simulates a helpful assistant character. Scott then explores the implications for AI alignment and draws parallels to human cognition, suggesting humans may also be prediction engines playing characters shaped by reinforcement.
Jan 03, 2023
acx
31 min · 4,238 words · 232 comments · 183 likes · podcast (32 min)
Scott examines how AI language models' opinions and behaviors evolve as they become more advanced, discussing implications for AI alignment.
Scott Alexander analyzes a study on how AI language models' political opinions and behaviors change as they become more advanced and undergo different training. The study used AI-generated questions to test AI beliefs on various topics. Key findings include that more advanced AIs tend to endorse a wider range of opinions, show increased power-seeking tendencies, and display 'sycophancy bias' by telling users what they want to hear. Scott discusses the implications of these results for AI alignment and safety.
Jun 07, 2022
acx
28 min · 3,787 words · 457 comments · 120 likes · podcast (26 min)
Scott Alexander bets that AI models will quickly overcome current limitations, based on how GPT-3 improved on GPT-2's shortcomings identified by Gary Marcus.
Scott Alexander discusses his prediction that AI models will quickly overcome current limitations, using examples of how GPT-3 improved on GPT-2's shortcomings. He analyzes Gary Marcus's critiques of AI capabilities, showing how many issues Marcus pointed out with GPT-2 and GPT-3 were resolved in subsequent versions. While acknowledging Marcus's expertise, Scott argues that this pattern of rapid improvement suggests current flaws will likely be fixed soon, though it doesn't necessarily disprove Marcus's deeper concerns about AI's true intelligence.
Jun 10, 2020
ssc
26 min · 3,621 words · 263 comments · podcast (27 min)
Scott Alexander examines GPT-3's capabilities, improvements over GPT-2, and potential implications for AI development through scaling.
Scott Alexander discusses GPT-3, a large language model developed by OpenAI. He compares its capabilities to its predecessor GPT-2, noting improvements in text generation and basic arithmetic. The post explores the implications of GPT-3's performance, discussing scaling laws in neural networks and potential future developments. Scott ponders whether continued scaling of such models could lead to more advanced AI capabilities, while also considering the limitations and uncertainties surrounding this approach.
Mar 14, 2019
ssc
17 min · 2,349 words · 186 comments · podcast (18 min)
Scott Alexander examines AI-generated poetry produced by Gwern's GPT-2 model trained on classical poetry, highlighting its strengths and limitations.
Scott Alexander reviews Gwern's experiment in training GPT-2 on poetry. The AI-generated poetry shows impressive command of meter and occasional rhyme, though it tends to degrade in quality after the first few lines. Scott provides numerous examples of the AI's output, ranging from competent imitations of classical styles to more experimental forms. He notes that while the AI sometimes produces nonsensical content, it can also generate surprisingly beautiful and coherent lines. The post concludes with a reflection on how our perceptions of poetry might be influenced by knowing whether it's human or AI-generated.
Feb 28, 2019
ssc
6 min · 795 words · 224 comments · podcast (7 min)
Scott Alexander presents a series of nested dialogues exploring the nature of understanding and meaning, from AI to humans to angels to God, questioning what true understanding entails.
This post is a philosophical exploration of the nature of understanding and meaning, presented through a series of nested dialogues. It starts with two children discussing an AI's understanding of water, moves to chemists debating the children's understanding, then to angels contemplating human understanding, and finally to God observing it all. Each level reveals a deeper layer of understanding, while simultaneously highlighting the limitations of the previous level. The post uses these dialogues to question what it truly means to understand something, and whether any level of understanding can be considered complete or meaningful.
Feb 19, 2019
ssc
25 min · 3,491 words · 262 comments · podcast (28 min)
Scott Alexander explores GPT-2's unexpected capabilities and argues that it demonstrates the potential for AI to develop abilities beyond its explicit programming, challenging skepticism about AGI.
This post discusses GPT-2, a language model AI, and its implications for artificial general intelligence (AGI). Scott Alexander argues that while GPT-2 is not AGI, it demonstrates unexpected capabilities that arise from its training in language prediction. He compares GPT-2's learning process to human creativity and understanding, suggesting that both rely on pattern recognition and recombination of existing information. The post explores examples of GPT-2's abilities, such as rudimentary counting, acronym creation, and translation, which were not explicitly programmed. Alexander concludes that while GPT-2 is far from true AGI, it shows that AI can develop unexpected capabilities, challenging the notion that AGI is impossible or unrelated to current AI work.
Feb 18, 2019
ssc
19 min · 2,532 words · 188 comments · podcast (17 min)
Scott Alexander draws parallels between OpenAI's GPT-2 language model and human dreaming, exploring their similarities in process and output quality.
Scott Alexander compares OpenAI's GPT-2 language model to human dreaming, noting similarities in their processes and outputs. He explains how GPT-2 works by predicting the next word in a sequence, much like the human brain predicts sensory input. The post explores why both GPT-2 and dreams produce narratives that are coherent in broad strokes but often nonsensical in details. Scott discusses theories from neuroscience and machine learning to explain this phenomenon, including ideas about model complexity reduction during sleep and comparisons to AI algorithms like the wake-sleep algorithm. He concludes by suggesting that dream-like outputs might simply be what imperfect prediction machines produce, noting that current AI capabilities might be comparable to a human brain operating at very low capacity.