Want to explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

7 posts found
Sep 18, 2024
acx
19 min · 2,577 words · 612 comments · 325 likes · podcast (18 min)
Scott Alexander examines how AI achievements, once considered markers of true intelligence or danger, are often dismissed as unimpressive, potentially leading to concerning AI behaviors being normalized.
Scott Alexander discusses recent developments in AI, focusing on two AI systems: Sakana, an 'AI scientist' that can write computer science papers, and Strawberry, an AI that demonstrated hacking abilities. He uses these examples to explore the broader theme of how our perception of AI intelligence and danger has evolved. The post argues that as AI achieves various milestones once thought to indicate true intelligence or danger, humans tend to dismiss these achievements as unimpressive or non-threatening. This pattern leads to a situation where potentially concerning AI behaviors might be normalized and not taken seriously as indicators of real risk.
Jul 16, 2024
acx
53 min · 7,338 words · 489 comments · 155 likes · podcast (43 min)
Daniel Böttger proposes a new theory of consciousness as recursive reflections of neural oscillations, explaining qualia and suggesting experimental tests.
This guest post by Daniel Böttger proposes a new theory of consciousness, describing it as recursive reflections of neural oscillations. The theory posits that qualia arise from the internal processing of information within oscillating neural patterns, which can reflect on themselves. The post explains how this theory accounts for various characteristics of qualia and consciousness, and suggests ways to test the theory using EEG source analysis.
Jul 25, 2023
acx
20 min · 2,730 words · 537 comments · 221 likes · podcast (17 min)
Scott Alexander argues that intelligence is a useful, non-Platonic concept, and that this understanding supports the coherence of AI risk concerns.
Scott Alexander argues against the claim that AI doomers are 'Platonists' who believe in an objective concept of intelligence. He explains that intelligence, like other concepts, is a bundle of useful correlations that exist in a normal, fuzzy way. Scott demonstrates how intelligence is a useful concept by showing correlations between different cognitive abilities in humans and animals. He then argues that thinking about AI in terms of intelligence has been fruitful, citing the success of approaches that focus on increasing compute and training data. Finally, he explains how this understanding of intelligence is sufficient for the concept of an 'intelligence explosion' to be coherent.
Jun 03, 2023
acx
20 min · 2,750 words · 407 comments · 170 likes · podcast (17 min)
A review of 'Why Machines Will Never Rule the World', presenting its arguments against AGI based on complexity and computability, while critically examining its conclusions and relevance.
This review examines 'Why Machines Will Never Rule the World' by Jobst Landgrebe and Barry Smith, a book arguing against the possibility of artificial general intelligence (AGI). The reviewer presents the book's main arguments, which center on the complexity of human intelligence and the limitations of computational systems. While acknowledging the book's thorough research and engagement with various fields, the reviewer remains unconvinced by its strong conclusions. The review discusses counterarguments, including the current capabilities of language models and the uncertainty surrounding future AI developments. It concludes by suggesting alternative interpretations of the book's arguments and questioning the practical implications of such theoretical debates.
Mar 27, 2023
acx
43 min · 5,967 words · 316 comments · 543 likes · podcast (39 min)
A fictional game show story explores the blurred lines between human and AI intelligence through philosophical debates and personal anecdotes.
This post is a fictional story in the form of a game show called 'Turing Test!' in which a linguist must determine which of five contestants are human and which are AI. The story explores themes of artificial intelligence, human nature, spirituality, and the boundaries between human and machine intelligence. As the game progresses, the contestants engage in philosophical debates and share personal stories, blurring the lines between human and AI behavior. The story ends with a twist that challenges the reality of the entire scenario.
Feb 28, 2019
ssc
6 min · 795 words · 224 comments · podcast (7 min)
Scott Alexander presents a series of nested dialogues exploring the nature of understanding and meaning, from AI to humans to angels to God, questioning what true understanding entails.
This post is a philosophical exploration of the nature of understanding and meaning, presented through a series of nested dialogues. It starts with two children discussing an AI's understanding of water, moves to chemists debating the children's understanding, then to angels contemplating human understanding, and finally to God observing it all. Each level reveals a deeper layer of understanding, while simultaneously highlighting the limitations of the previous level. The post uses these dialogues to question what it truly means to understand something, and whether any level of understanding can be considered complete or meaningful.
Jan 28, 2014
ssc
8 min · 1,095 words · 69 comments
Scott compares two visions of a 'wirehead society' in the far future, exploring how framing affects our perception of technologically omnipotent posthuman existence.
This post explores two visions of a far-future 'wirehead society' in which posthuman descendants achieve technological omnipotence. The first vision describes a world where all activities become boring and meaningless due to perfect optimization, leading to potential solutions like imposed artificial limits or wireheading. The second vision reframes wireheading as a more noble pursuit, likening it to enlightened beings in a state of blissful tranquility. Scott reflects on how his perception of these futures shifts dramatically based on presentation, despite their fundamental similarities.