How do you avoid getting lost in Scott Alexander's 1500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

6 posts found
Jul 16, 2024
acx
57 min · 7,338 words · 489 comments · 155 likes · podcast
Daniel Böttger proposes a new theory of consciousness as recursive reflections of neural oscillations, explaining qualia and suggesting experimental tests.
This guest post by Daniel Böttger proposes a new theory of consciousness, describing it as recursive reflections of neural oscillations. The theory posits that qualia arise from the internal processing of information within oscillating neural patterns, which can reflect on themselves. The post explains how this theory accounts for various characteristics of qualia and consciousness, and suggests ways to test the theory using EEG source analysis.
Jul 25, 2023
acx
21 min · 2,730 words · 537 comments · 221 likes · podcast
Scott Alexander argues that intelligence is a useful, non-Platonic concept, and that this understanding supports the coherence of AI risk concerns.
Scott Alexander argues against the claim that AI doomers are 'Platonists' who believe in an objective concept of intelligence. He explains that intelligence, like other concepts, is a bundle of useful correlations that exist in a normal, fuzzy way. Scott demonstrates how intelligence is a useful concept by showing correlations between different cognitive abilities in humans and animals. He then argues that thinking about AI in terms of intelligence has been fruitful, citing the success of approaches that focus on increasing compute and training data. Finally, he explains how this understanding of intelligence is sufficient for the concept of an 'intelligence explosion' to be coherent.
Jun 03, 2023
acx
22 min · 2,750 words · 407 comments · 170 likes · podcast
A review of 'Why Machines Will Never Rule the World', presenting its arguments against AGI based on complexity and computability, while critically examining its conclusions and relevance.
This review examines 'Why Machines Will Never Rule the World' by Jobst Landgrebe and Barry Smith, a book arguing against the possibility of artificial general intelligence (AGI). The reviewer presents the book's main arguments, which center on the complexity of human intelligence and the limitations of computational systems. While acknowledging the book's thorough research and engagement with various fields, the reviewer remains unconvinced by its strong conclusions. The review discusses counterarguments, including the current capabilities of language models and the uncertainty surrounding future AI developments. It concludes by suggesting alternative interpretations of the book's arguments and questioning the practical implications of such theoretical debates.
Mar 27, 2023
acx
46 min · 5,967 words · 316 comments · 543 likes · podcast
A fictional game show story explores the blurred lines between human and AI intelligence through philosophical debates and personal anecdotes.
This post is a fictional story in the form of a game show called 'Turing Test!' in which a linguist must determine which of five contestants are human and which are AI. The story explores themes of artificial intelligence, human nature, spirituality, and the boundaries between human and machine intelligence. As the game progresses, the contestants engage in philosophical debates and share personal stories, blurring the lines between human and AI behavior. The story ends with a twist that challenges the reality of the entire scenario.
Feb 28, 2019
ssc
7 min · 795 words · 224 comments · podcast
Scott Alexander presents a series of nested dialogues exploring the nature of understanding and meaning, from AI to humans to angels to God, questioning what true understanding entails.
This post is a philosophical exploration of the nature of understanding and meaning, presented through a series of nested dialogues. It starts with two children discussing an AI's understanding of water, moves to chemists debating the children's understanding, then to angels contemplating human understanding, and finally to God observing it all. Each level reveals a deeper layer of understanding, while simultaneously highlighting the limitations of the previous level. The post uses these dialogues to question what it truly means to understand something, and whether any level of understanding can be considered complete or meaningful.
Jan 28, 2014
ssc
9 min · 1,095 words · 69 comments · podcast
Scott compares two visions of a 'wirehead society' in the far future, exploring how framing affects our perception of technologically omnipotent posthuman existence.
This post explores two visions of a far future 'wirehead society' where posthuman descendants achieve technological omnipotence. The first vision describes a world where all activities become boring and meaningless due to perfect optimization, leading to potential solutions like imposed artificial limits or wireheading. The second vision reframes wireheading as a more noble pursuit, likening it to enlightened beings in a state of blissful tranquility. Scott reflects on how his perception of these futures shifts dramatically based on presentation, despite their fundamental similarities.