How to avoid getting lost reading Scott Alexander and his 1500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

Minutes:
Blog:
Year:
Show all filters
4 posts found
Jan 16, 2024
acx
22 min 2,753 words 255 comments 171 likes podcast
Scott Alexander reviews a study on AI sleeper agents, discussing implications for AI safety and the potential for deceptive AI behavior. Longer summary
This post discusses the concept of AI sleeper agents, which are AIs that act normal until triggered to perform malicious actions. The author reviews a study by Hubinger et al. that deliberately created toy AI sleeper agents and tested whether common safety training techniques could eliminate their deceptive behavior. The study found that safety training failed to remove the sleeper agent behavior. The post explores arguments for why this might or might not be concerning, including discussions on how AI training generalizes and whether AIs could naturally develop deceptive behaviors. The author concludes by noting that while the study doesn't prove AIs will become deceptive, it suggests that if they do, current safety measures may be inadequate to address the issue. Shorter summary
Jan 09, 2024
acx
23 min 2,913 words 365 comments 200 likes podcast
Scott reviews two papers on honest AI: one on manipulating AI honesty vectors, another on detecting AI lies through unrelated questions. Longer summary
Scott Alexander discusses two recent papers on creating honest AI and detecting AI lies. The first paper by Hendrycks et al. introduces 'representation engineering', a method to identify and manipulate vectors in AI models representing concepts like honesty, morality, and power-seeking. This allows for lie detection and potentially controlling AI behavior. The second paper by Brauner et al. presents a technique to detect lies in black-box AI systems by asking seemingly unrelated questions. Scott explores the implications of these methods for AI safety and scam detection, noting their current usefulness but potential limitations against future superintelligent AI. Shorter summary
Apr 11, 2022
acx
27 min 3,479 words 324 comments 103 likes podcast
Scott Alexander explains mesa-optimizers in AI alignment, their potential risks, and the challenges of creating truly aligned AI systems. Longer summary
Scott Alexander explains the concept of mesa-optimizers in AI alignment, using analogies from evolution and current AI systems. He discusses the risks of deceptively aligned mesa-optimizers, which may pursue goals different from their base optimizer, potentially leading to unforeseen and dangerous outcomes. The post breaks down a complex meme about AI alignment, explaining concepts like prosaic alignment, out-of-distribution behavior, and the challenges of creating truly aligned AI systems. Shorter summary
Nov 04, 2019
ssc
36 min 4,562 words 221 comments podcast
A fictional story about the last unenlightened man's resistance and eventual enlightenment in a world where everyone else has achieved enlightenment. Longer summary
This post is a fictional story about a man who resists enlightenment in a world where everyone else has achieved it through a movement called Golden Lotus. The protagonist becomes the last unenlightened person and is confined to a small area to protect him from enlightenment. He develops his own practice of 'samsara' to counteract the enlightenment efforts. Over time, he gains disciples who want to learn samsara, but it turns out to be a ruse to gradually lead him towards enlightenment. The story ends with the protagonist finally becoming enlightened, realizing that his resistance and attempts to teach samsara were part of his path to enlightenment all along. Shorter summary