Scott explores three approaches to 'writing for AI' - teaching knowledge, influencing beliefs, and enabling simulation - finding the first limited, the second theoretically confused, and the third creepy and ethically troubling.
Longer summary
Scott examines the concept of 'writing for AI' - creating content intended to influence future AI systems - through three lenses: helping AIs learn knowledge, presenting arguments to shape AI beliefs, and helping AIs model writers in enough detail to recreate them. He finds the first limited, the second theoretically muddled, and the third deeply unsettling. The post explores why influencing AI beliefs faces both practical obstacles (alignment training will override corpus data) and theoretical ones (finding the right sweet spot of influence). Scott is particularly disturbed by the idea of AIs simulating him, comparing it to being 'an ape in some transhuman zoo,' and he struggles with the question of whether writers should try to impose their values on future AI systems.
Shorter summary