How do you avoid getting lost in Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

4 posts found
Jan 06, 2020 · ssc · 11 min · 1,343 words · 182 comments · podcast
Scott Alexander plays chess against GPT-2, an AI language model, and discusses the broader implications of AI's ability to perform diverse tasks without specific training.
Scott Alexander describes a chess game he played against GPT-2, an AI language model not designed for chess. Despite neither player performing well, GPT-2 managed to play a decent game without any understanding of chess or spatial concepts. The post then discusses the work of Gwern Branwen and Shawn Presser in training GPT-2 to play chess, showing its ability to learn opening theory and play reasonably well for several moves. Scott reflects on the implications of an AI designed for text prediction being able to perform tasks like writing poetry, composing music, and playing chess without being specifically designed for them.
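
A minimal sketch of the text-completion framing behind that experiment: a chess game is just a string of moves, and the model is asked to continue the string. The model name, prompt, and sampling settings below are illustrative assumptions; the stock gpt2 checkpoint, unlike Gwern and Presser's fine-tuned model, was never trained on chess and will continue the text badly.

```python
# Sketch of the prompt-completion approach: feed a game in algebraic
# notation as plain text and read the model's continuation as its move.
# "gpt2" is the generic base checkpoint, used here only for illustration.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "1. e4 e5 2. Nf3 Nc6 3. Bb5"  # the game so far, as text
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=8,                      # room for roughly one more move
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,   # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
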
Feb 19, 2019 · ssc · 27 min · 3,491 words · 262 comments · podcast
Scott Alexander explores GPT-2's unexpected capabilities and argues that it demonstrates the potential for AI to develop abilities beyond its explicit programming, challenging skepticism about AGI.
This post discusses GPT-2, a language model AI, and its implications for artificial general intelligence (AGI). Scott Alexander argues that while GPT-2 is not AGI, it demonstrates unexpected capabilities that arise from its training in language prediction. He compares GPT-2's learning process to human creativity and understanding, suggesting that both rely on pattern recognition and recombination of existing information. The post explores examples of GPT-2's abilities, such as rudimentary counting, acronym creation, and translation, which were not explicitly programmed. Alexander concludes that while GPT-2 is far from true AGI, it shows that AI can develop unexpected capabilities, challenging the notion that AGI is impossible or unrelated to current AI work.
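
For readers who want the underlying objective pinned down: every capability described above falls out of a single training signal, next-token prediction scored by cross-entropy. A minimal sketch, assuming the Hugging Face transformers library and the stock gpt2 checkpoint (the example text is an arbitrary choice):

```python
# GPT-2's whole training objective: predict the next token, scored by
# cross-entropy. Passing the input ids as labels makes the model score
# its own next-token predictions (the library shifts labels internally).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "One plus one equals two. Two plus two equals"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, labels=inputs["input_ids"])

# Average cross-entropy per token; lower means the text was more predictable.
print(f"next-token cross-entropy: {out.loss.item():.3f}")
```
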
Aug 28, 2015 · ssc · 16 min · 2,008 words · 326 comments · podcast
Scott Alexander hypothesizes that mystical experiences, hallucinations, and paranoia might be linked to an overactive pattern-matching faculty in the brain.
Scott Alexander explores the relationship between mysticism, pattern-matching, and mental health. He suggests that hallucinations, paranoia, and mystical experiences might all be related to an overactive pattern-matching faculty in the brain. The post begins by discussing how the brain's failure modes differ from those of computers, then explains top-down processing and pattern matching using visual examples. It then connects these concepts to hallucinations, paranoia, and mystical experiences. Scott proposes that certain practices like meditation, drug use, and religious rituals may strengthen the pattern-matching faculty, leading to experiences of universal connectedness or enlightenment. He acknowledges that this hypothesis doesn't explain all aspects of mystical experiences and their benefits.
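
One way to make the "overactive pattern-matching" idea concrete (a toy illustration, not math from the post) is a two-hypothesis Bayesian update: hold the sensory evidence fixed and ambiguous, and a strong enough top-down prior flips the verdict to "pattern".

```python
# Toy model of top-down processing: a prior expectation of seeing a
# pattern is weighed against ambiguous bottom-up evidence. All numbers
# are made up for illustration.
def posterior_pattern(prior, likelihood_if_pattern, likelihood_if_noise):
    """P(pattern | evidence) for a binary pattern-vs-noise hypothesis."""
    joint_pattern = prior * likelihood_if_pattern
    joint_noise = (1 - prior) * likelihood_if_noise
    return joint_pattern / (joint_pattern + joint_noise)

# The same ambiguous evidence, barely favoring "pattern"...
ambiguous = dict(likelihood_if_pattern=0.5, likelihood_if_noise=0.4)

print(posterior_pattern(prior=0.1, **ambiguous))  # weak prior   -> ~0.12
print(posterior_pattern(prior=0.9, **ambiguous))  # strong prior -> ~0.92
```
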
Mar 15, 2014 · ssc · 18 min · 2,211 words · 117 comments · podcast
Scott Alexander examines the process of 'crystallizing patterns' in thinking, discussing its benefits and potential pitfalls across various domains.
Scott Alexander explores the concept of 'crystallizing patterns' in thinking, using examples from the Less Wrong sequences and C.S. Lewis's writings. He discusses how naming and defining patterns can make them easier to recognize and think about, potentially changing how people view certain issues. The post examines whether this process can ever be wrong or counterproductive, concluding that while it can sometimes be misleading, it's generally beneficial if done carefully. Scott uses various examples to illustrate his points, including political correctness, mainstream media, and religious concepts.