How do you avoid getting lost among Scott Alexander's 1500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

2 posts found
Oct 03, 2022 · acx · 42 min · 5,447 words · 431 comments · 67 likes · podcast
Scott Alexander explains and analyzes the debate between MIRI and CHAI on AI alignment strategies, focusing on the challenges and potential flaws in CHAI's 'assistance games' approach.
This post discusses the debate between MIRI and CHAI regarding AI alignment strategies, focusing on CHAI's 'assistance games' approach and MIRI's critique of it. The author explains the concepts of sovereign and corrigible AI, inverse reinforcement learning, and the challenges in implementing these ideas in modern AI systems. The post concludes with a brief exchange between Eliezer Yudkowsky (MIRI) and Stuart Russell (CHAI), highlighting their differing perspectives on the feasibility and potential pitfalls of the assistance games approach.
Feb 06, 2017 · ssc · 23 min · 2,929 words · 480 comments · podcast
Scott Alexander shares his experiences and insights from the Asilomar Conference on Beneficial AI, covering various aspects of AI development, risks, and ethical considerations.
Scott Alexander recounts his experience at the Asilomar Conference on Beneficial AI. The conference brought together diverse experts to discuss AI risks, from technological unemployment to superintelligence. Key points include: the normalization of AI safety research, economists' views on technological unemployment, proposed solutions like retraining workers, advances in AI goal alignment research, improvements in AlphaGo and its implications, issues with AI transparency, political considerations in AI development, and debates on ethical AI principles. Scott notes the star-studded attendance and the surreal experience of discussing crucial AI topics with leading experts in the field.