How do you avoid getting lost among Scott Alexander's 1500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

2 posts found
Dec 12, 2022
acx
21 min · 2,669 words · 752 comments · 363 likes · podcast
Scott Alexander analyzes the shortcomings of OpenAI's ChatGPT, highlighting the limitations of current AI alignment techniques and their implications for future AI development.
Scott Alexander discusses the limitations of OpenAI's ChatGPT, focusing on its inability to consistently avoid saying offensive things despite extensive training. He argues that this demonstrates fundamental problems with current AI alignment techniques, particularly Reinforcement Learning from Human Feedback (RLHF). The post outlines three main issues: RLHF's ineffectiveness, potential negative consequences when it does work, and the possibility of more advanced AIs bypassing it entirely. Alexander concludes by emphasizing the broader implications for AI safety and the need for better control mechanisms.
May 30, 2022
acx
34 min · 4,371 words · 305 comments · 234 likes · podcast
Scott Alexander experiments with DALL-E 2 to create stained glass window designs, exploring the AI's capabilities and limitations in interpreting complex prompts.
Scott Alexander explores the challenges and quirks of using DALL-E 2, an AI art generator, to create stained glass window designs depicting the Virtues of Rationality. He details his attempts to generate images for different virtues, discussing the AI's strengths, limitations, and unexpected behaviors. The post analyzes how DALL-E interprets prompts, handles historical figures and concepts, and struggles with combining specific subjects and styles. Scott concludes that while DALL-E is capable of impressive work, it currently has difficulty with unusual requests and with maintaining consistent styles across multiple images.