Want to explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

2 posts found
Should AI Be Open?
Dec 17, 2015 · SSC · 31 min read · 4,288 words · 798 comments
Scott Alexander argues that OpenAI's open-source strategy for AI development could be dangerous, potentially risking human extinction if AI progresses rapidly.
Scott Alexander critiques OpenAI's strategy of making AI research open-source, arguing it could be dangerous if AI develops rapidly. He compares it to giving nuclear weapon plans to everyone, potentially leading to catastrophe. The post analyzes the risks and benefits of open AI, discusses the potential for a hard takeoff in AI development, and examines the AI control problem. Scott expresses concern that competition in AI development may be forcing desperate strategies, potentially risking human extinction.
No Time Like The Present For AI Safety Work
May 29, 2015 · SSC · 35 min read · 4,770 words · 682 comments
Scott argues for the importance of starting AI safety research now, presenting key problems and reasons why early work is crucial.
This post argues for the importance of starting AI safety research now, rather than waiting until AI becomes more advanced. Scott presents five key points about AI development and potential risks, then discusses three specific problems in AI safety: wireheading, weird decision theory, and the evil genie effect. He explains why these problems are relevant and can be worked on now, addressing counterarguments about the usefulness of early research. The post concludes by presenting three reasons why we shouldn't delay AI safety work: the treacherous turn, hard takeoff scenarios, and ordinary time constraints given AI progress predictions.