Trying to avoid getting lost in Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

9 posts found
Nov 27, 2023 · acx · 28 min · 3,513 words · 234 comments · 288 likes · podcast
Shorter summary: Scott Alexander discusses recent breakthroughs in AI interpretability, explaining how researchers are beginning to understand the internal workings of neural networks.
Longer summary: Scott Alexander explores recent advances in AI interpretability, focusing on Anthropic's 'Towards Monosemanticity' paper. He explains how neural networks function and introduces the concept of superposition, in which a network represents more concepts than it has neurons, so that individual neurons respond to several unrelated features. He then describes how researchers have begun interpreting a network's internal workings by projecting its real neurons onto a larger set of simulated neurons. The post discusses the implications of this research for understanding both artificial and biological neural systems, as well as its potential impact on AI safety and alignment.
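As a rough illustration of the technique this summary gestures at (my own sketch, not something from the post or the paper): a toy sparse autoencoder that projects a layer's real neurons into a larger set of sparsely active "simulated neurons". All sizes, rates, and data below are made-up assumptions, not Anthropic's actual setup.

```python
import torch
import torch.nn as nn

# Toy sparse autoencoder: expand d_model "real neurons" into d_features
# sparsely active "simulated neurons" (d_features >> d_model), then reconstruct.
d_model, d_features = 64, 512                  # illustrative sizes only
encoder = nn.Linear(d_model, d_features)
decoder = nn.Linear(d_features, d_model, bias=False)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

activations = torch.randn(10_000, d_model)     # stand-in for recorded activations
l1_coeff = 1e-3                                # sparsity penalty strength (assumed)

for step in range(1_000):
    batch = activations[torch.randint(0, len(activations), (256,))]
    features = torch.relu(encoder(batch))      # sparse "simulated neuron" activity
    recon = decoder(features)
    loss = ((recon - batch) ** 2).mean() + l1_coeff * features.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, each column of decoder.weight is a candidate interpretable
# direction to inspect (with real recorded activations rather than this random
# stand-in data).
```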
Jun 14, 2023 · acx · 38 min · 4,870 words · 181 comments · 207 likes · podcast
Shorter summary: Scott Alexander examines the 'canalization' theory in computational psychiatry and its refinement through deep learning concepts in the Deep CANAL model.
Longer summary: Scott Alexander discusses a new paradigm in computational psychiatry called 'canalization', which models mental processes as an energy landscape with valleys representing attractors or persistent beliefs/behaviors. He then explores a follow-up paper that applies concepts from deep learning to refine this theory, introducing the Deep CANAL model. This model attempts to explain various mental disorders by mapping them onto different types of computational issues in artificial neural networks, such as overfitting/underfitting and the stability/plasticity dilemma. Scott expresses both interest and skepticism about this approach, noting its potential insights but also its limitations and potential contradictions with other theories.
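To make the "energy landscape with valleys as attractors" picture concrete, here is a minimal sketch (my own illustration, not from the post or the papers) of a double-well energy function: a state rolls downhill and settles into whichever valley it starts near, which is the sense in which a valley is an attractor.

```python
# Toy "energy landscape" with two valleys (attractors): a state rolls downhill
# and settles into whichever valley it started near.
def energy(x):
    return (x**2 - 1.0) ** 2        # double well with minima at x = -1 and x = +1

def grad(x):
    return 4 * x * (x**2 - 1.0)     # derivative of the energy

def settle(x0, lr=0.05, steps=500):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)           # roll downhill
    return x

print(settle(0.2))    # ~ +1.0: starts on the right slope, ends in the right valley
print(settle(-0.2))   # ~ -1.0: left valley
# A deeper, steeper valley (a more "canalized" state) needs a bigger push to escape.
```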
Aug 18, 2022 · acx · 14 min · 1,815 words · 237 comments · 167 likes · podcast
Shorter summary: Scott Alexander examines why skills plateau, proposing decay and interference hypotheses to explain the phenomenon across various fields.
Longer summary: Scott Alexander explores why skills plateau despite continued practice, focusing on creative artists, doctors, and formal education. He presents two main hypotheses: the decay hypothesis, where knowledge is forgotten if not regularly reviewed, and the interference hypothesis, where similar information blends together, making it difficult to learn new things in the same domain. The post discusses how these hypotheses explain various learning phenomena, including the ability to learn multiple unrelated skills simultaneously. Scott also considers edge cases and potential applications of these theories, such as mnemonic devices and language learning strategies.
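As a tiny worked example of the decay hypothesis (my own toy model, not Scott's, with made-up rates): if you learn a fixed amount per day but forget a fixed fraction of what you already know, knowledge grows quickly at first and then plateaus where the daily gain equals the daily loss.

```python
# Toy decay-hypothesis model: fixed learning per day, proportional forgetting.
learn_per_day = 10.0   # items learned per day (assumed)
forget_rate = 0.01     # fraction of known items forgotten per day (assumed)

knowledge = 0.0
for day in range(1, 2001):
    knowledge += learn_per_day
    knowledge *= 1 - forget_rate
    if day in (30, 365, 2000):
        print(day, round(knowledge))

# Prints roughly 258, 965, 990: rapid early growth, then a plateau near
# learn_per_day * (1 - forget_rate) / forget_rate = 990, where gain equals loss.
```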
Apr 14, 2021 · acx · 5 min · 540 words · 85 comments · 46 likes · podcast
Shorter summary: Scott Alexander discusses recent research unifying predictive coding in the brain with backpropagation in machine learning, exploring its implications for AI and neuroscience.
Longer summary: Scott Alexander discusses a recent paper and Less Wrong post that unify predictive coding, a theory of how the brain works, with backpropagation, an algorithm used in machine learning. The post explains the significance of this unification, which shows that predictive coding can approximate backpropagation without needing backwards information transfer in neurons. Scott explores the implications of this research, including the potential fusion of AI and neuroscience into a single mathematical field and possibilities for neuromorphic computing hardware.
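To make that claim concrete, here is a minimal sketch (my own construction, using a two-layer linear network and made-up sizes as simplifying assumptions) showing that the local predictive-coding weight update lands close to the exact backpropagation gradient while using only locally available error signals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer linear network y = W2 @ (W1 @ x), with a small output error.
W1 = rng.normal(size=(4, 3)) * 0.3
W2 = rng.normal(size=(2, 4)) * 0.3
x0 = rng.normal(size=(3, 1))
target = W2 @ W1 @ x0 + 0.01 * rng.normal(size=(2, 1))

# Exact backprop gradients of 0.5 * ||y - target||^2.
h = W1 @ x0
y = W2 @ h
delta = y - target
gW2_bp = delta @ h.T
gW1_bp = (W2.T @ delta) @ x0.T

# Predictive coding: clamp the output node to the target, relax the hidden
# activity x1 by gradient descent on the prediction-error energy, then apply
# purely local (Hebbian-style) weight updates.
x1 = h.copy()
x2 = target.copy()
for _ in range(200):
    eps1 = x1 - W1 @ x0
    eps2 = x2 - W2 @ x1
    x1 += 0.1 * (W2.T @ eps2 - eps1)
eps1 = x1 - W1 @ x0
eps2 = x2 - W2 @ x1
gW2_pc = -eps2 @ x1.T
gW1_pc = -eps1 @ x0.T

cos = lambda a, b: (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos(gW1_bp, gW1_pc), cos(gW2_bp, gW2_pc))
# Both cosines come out close to 1.0; the match is approximate and improves as
# the output error shrinks.
```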
Jun 11, 2020 · ssc · 3 min · 294 words · 101 comments · podcast
Shorter summary: Scott Alexander explores the similarities between Wernicke's aphasia and GPT-3's language use, while noting that GPT-3's capabilities likely surpass this neurological comparison.
Longer summary: Scott Alexander discusses the two major brain areas involved in language processing: Wernicke's area (handling meaning) and Broca's area (handling structure and flow). He describes how damage to each area results in different types of language impairment, with particular focus on Wernicke's aphasia, where speech retains normal structure but lacks meaning. Scott draws a parallel between this condition and the eerie feeling some people get from GPT-3's language use. However, he concludes that GPT-3's capabilities are likely beyond the simple Broca's/Wernicke's dichotomy, though he expresses interest in understanding the computational considerations behind this neurological split.
Jun 10, 2020 · ssc · 28 min · 3,621 words · 263 comments · podcast
Shorter summary: Scott Alexander examines GPT-3's capabilities, improvements over GPT-2, and potential implications for AI development through scaling.
Longer summary: Scott Alexander discusses GPT-3, a large language model developed by OpenAI. He compares its capabilities to its predecessor GPT-2, noting improvements in text generation and basic arithmetic. The post explores the implications of GPT-3's performance, discussing scaling laws in neural networks and potential future developments. Scott ponders whether continued scaling of such models could lead to more advanced AI capabilities, while also considering the limitations and uncertainties surrounding this approach.
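For readers unfamiliar with what "scaling laws" refers to here, a small illustration (mine, not the post's; the constants are rough placeholders and should not be read as the published fits): test loss falling as a power law in parameter count.

```python
# Illustrative power-law scaling curve: loss falls smoothly as a power of
# parameter count. Constants below are ballpark placeholders for illustration.
def loss(n_params, n_c=8.8e13, alpha=0.076):
    return (n_c / n_params) ** alpha

for n in (1.5e9, 1.75e11, 1e13):   # roughly GPT-2 scale, GPT-3 scale, and beyond
    print(f"{n:.1e} params -> loss ~ {loss(n):.2f}")
# Each 100x increase in parameters shaves off a similar multiplicative chunk of loss.
```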
Feb 19, 2019 · ssc · 27 min · 3,491 words · 262 comments · podcast
Shorter summary: Scott Alexander explores GPT-2's unexpected capabilities and argues that it demonstrates the potential for AI to develop abilities beyond its explicit programming, challenging skepticism about AGI.
Longer summary: This post discusses GPT-2, a language model AI, and its implications for artificial general intelligence (AGI). Scott Alexander argues that while GPT-2 is not AGI, it demonstrates unexpected capabilities that arise from its training in language prediction. He compares GPT-2's learning process to human creativity and understanding, suggesting that both rely on pattern recognition and recombination of existing information. The post explores examples of GPT-2's abilities, such as rudimentary counting, acronym creation, and translation, which were not explicitly programmed. Alexander concludes that while GPT-2 is far from true AGI, it shows that AI can develop unexpected capabilities, challenging the notion that AGI is impossible or unrelated to current AI work.
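If you want to poke at GPT-2 yourself, here is a minimal example using the Hugging Face transformers library (tooling that postdates the original post; the prompt and sampling settings are arbitrary choices for illustration):

```python
# Sample a continuation from GPT-2 via the Hugging Face transformers pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("The rain in Spain falls mainly", max_new_tokens=30, do_sample=True)
print(out[0]["generated_text"])
```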
Oct 30, 2016 · ssc · 18 min · 2,240 words · 141 comments · podcast
Shorter summary: Scott Alexander examines how recent AI progress in neural networks might challenge the Bostromian paradigm of AI risk, exploring potential implications for AI goal alignment and motivation systems.
Longer summary: This post discusses how recent advances in AI, particularly in neural networks and deep learning, might affect the Bostromian paradigm of AI risk. Scott Alexander explores two perspectives: the engineer's view that categorization abilities are just tools and not the core of AGI, and the biologist's view that brain-like neural networks might be adaptable to create motivation systems. He suggests that categorization and abstraction might play a crucial role in developing AI moral sense and motivation, potentially leading to AIs that are less likely to be extreme goal-maximizers. The post ends by acknowledging MIRI's work on logical AI safety while suggesting the need for research in other directions as well.
Jun 25, 2014 · ssc · 6 min · 753 words · 57 comments · podcast
Shorter summary: Scott Alexander humorously explores the World Cup's complex rules, game theory in soccer, and unusual incentive structures in international tournaments.
Longer summary: Scott Alexander humorously discusses the World Cup, focusing on its complex advancement rules and the strategic implications for teams. He explores the idea of the tournament as a primitive neural net, and ponders whether a large enough soccer tournament could achieve sentience. The post then delves into game theory, discussing the potential for teams to cooperate for mutual benefit in certain scenarios. It concludes with an anecdote about a 1994 Caribbean Cup game that had an extremely unusual incentive structure due to a peculiar rule.