How can you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

4 posts found
Sep 19, 2022
acx
18 min · 2,451 words · 73 comments · 109 likes · podcast (27 min)
Scott Alexander discusses Janus' experiments with GPT-3, exploring its capabilities, quirks, and potential implications.
This post discusses Janus' work with GPT-3, exploring its capabilities and quirks. It covers how GPT-3 can generate self-aware stories, the differences between older and newer versions of the model, its tendency to fixate on certain responses, and some amusing experiments. The post highlights the balance between creativity and efficiency in AI language models, and touches on the potential implications of AI development.
Jul 26, 2022
acx
47 min · 6,446 words · 298 comments · 107 likes · podcast (42 min)
Scott Alexander examines the Eliciting Latent Knowledge (ELK) problem in AI alignment and various proposed solutions.
Scott Alexander discusses the Eliciting Latent Knowledge (ELK) problem in AI alignment, which involves training an AI to truthfully report what it knows. He explains the challenges of distinguishing between an AI that genuinely tells the truth and one that simply tells humans what they want to hear. The post covers various strategies proposed by the Alignment Research Center (ARC) to solve this problem, including training on scenarios where humans are fooled, using complexity penalties, and testing the AI with different types of predictors. Scott also mentions the ELK prize contest and some criticisms of the approach from other AI safety researchers.
Jun 11, 2020
ssc
3 min · 294 words · 101 comments · podcast (4 min)
Scott Alexander explores the similarities between Wernicke's aphasia and GPT-3's language use, while noting that GPT-3's capabilities likely surpass this neurological comparison.
Scott Alexander discusses the two major brain areas involved in language processing: Wernicke's area (handling meaning) and Broca's area (handling structure and flow). He describes how damage to each area results in different types of language impairment, with particular focus on Wernicke's aphasia, where speech retains normal structure but lacks meaning. Scott draws a parallel between this condition and the eerie feeling some people get from GPT-3's language use. However, he concludes that GPT-3's capabilities are likely beyond the simple Broca's/Wernicke's dichotomy, though he expresses interest in understanding the computational considerations behind this neurological split.
Jun 10, 2020
ssc
26 min · 3,621 words · 263 comments · podcast (27 min)
Scott Alexander examines GPT-3's capabilities, improvements over GPT-2, and potential implications for AI development through scaling.
Scott Alexander discusses GPT-3, a large language model developed by OpenAI. He compares its capabilities to its predecessor GPT-2, noting improvements in text generation and basic arithmetic. The post explores the implications of GPT-3's performance, discussing scaling laws in neural networks and potential future developments. Scott ponders whether continued scaling of such models could lead to more advanced AI capabilities, while also considering the limitations and uncertainties surrounding this approach.