How do you avoid getting lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

3 posts found
Sep 19, 2022
acx
19 min · 2,451 words · 73 comments · 109 likes · podcast
Scott Alexander discusses Janus' experiments with GPT-3, exploring its capabilities, quirks, and potential implications.
This post discusses Janus' work with GPT-3, exploring its capabilities and quirks. It covers how GPT-3 can generate self-aware stories, the differences between older and newer versions of the model, its tendency to fixate on certain responses, and some amusing experiments. The post highlights the balance between creativity and efficiency in AI language models, and touches on the potential implications of AI development.
Jun 11, 2020
ssc
3 min · 294 words · 101 comments · podcast
Scott Alexander explores the similarities between Wernicke's aphasia and GPT-3's language use, while noting that GPT-3's capabilities likely surpass this neurological comparison.
Scott Alexander discusses the two major brain areas involved in language processing: Wernicke's area (handling meaning) and Broca's area (handling structure and flow). He describes how damage to each area results in different types of language impairment, with particular focus on Wernicke's aphasia, where speech retains normal structure but lacks meaning. Scott draws a parallel between this condition and the eerie feeling some people get from GPT-3's language use. However, he concludes that GPT-3's capabilities are likely beyond the simple Broca's/Wernicke's dichotomy, though he expresses interest in understanding the computational considerations behind this neurological split.
Jun 20, 2019
ssc
2 min · 136 words · 109 comments · podcast
Scott Alexander humorously describes AI-generated content simulating humans pretending to be robots pretending to be humans on Reddit.
Scott Alexander humorously discusses the intersection of two subreddits: r/totallynotrobots, where humans pretend to be badly-disguised robots, and r/SubSimulatorGPT2, which uses GPT-2 to imitate various subreddits. The result is an AI-generated simulation of humans pretending to be robots pretending to be humans. Scott shares an example of this amusing output and expresses wonder at the current state of technology.