Scott argues for the importance of starting AI safety research now, presenting key problems and reasons why early work is crucial.
Longer summary
This post argues for starting AI safety research now rather than waiting until AI becomes more advanced. Scott presents five key points about AI development and its potential risks, then discusses three specific problems in AI safety: wireheading, weird decision theory, and the evil genie effect. He explains why these problems are relevant, why they can be worked on now, and addresses counterarguments that early research is useless. The post concludes with three reasons not to delay AI safety work: the treacherous turn, hard takeoff scenarios, and ordinary time constraints given predictions of AI progress.