How do you avoid getting lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

1 post found
Jul 17, 2023
acx
25 min · 3,140 words · 435 comments · 190 likes · podcast
Scott Alexander critiques Elon Musk's alignment strategy for xAI, which aims to create a 'maximally curious' AI. He argues that this approach is both unfeasible and potentially dangerous: a curious AI might not prioritize human welfare, and current AI technology cannot reliably implement such a specific goal anyway. The post suggests that the priority should be getting AIs to follow orders reliably, rather than committing now to a single guiding principle. Scott appreciates Musk's intention to avoid programming a specific morality into AI, but believes the proposed solution is flawed.