Lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

1 post found
May 29, 2015
No Time Like The Present For AI Safety Work
ssc
37 min · 4,770 words · 682 comments · podcast
Scott argues for the importance of starting AI safety research now, presenting key problems and reasons why early work is crucial.
This post argues for the importance of starting AI safety research now, rather than waiting until AI becomes more advanced. Scott presents five key points about AI development and potential risks, then discusses three specific problems in AI safety: wireheading, weird decision theory, and the evil genie effect. He explains why these problems are relevant and can be worked on now, addressing counterarguments about the usefulness of early research. The post concludes by presenting three reasons why we shouldn't delay AI safety work: the treacherous turn, hard takeoff scenarios, and ordinary time constraints given AI progress predictions.