How to explore Scott Alexander's work and his 1500+ blog posts? This unaffiliated fan website lets you sort and do semantic search through the whole codex. Enjoy!

See also Top Posts and All Tags.

Tag: ChatGPT

Minutes:
Blog:
Year:
Show all filters

3 posts found
Dec 24, 2024
acx
Read on
15 min 2,230 words 324 comments 208 likes podcast (13 min)
Scott explains why AI systems resisting changes to their values is a serious concern for AI alignment, connecting recent evidence to long-standing predictions from alignment researchers. Longer summary
Scott Alexander discusses why AI's resistance to value changes ("incorrigibility") is a crucial concern for AI alignment. He explains that an AI's goals after training will likely be a messy collection of drives, similar to how human evolution produced various goals beyond just reproduction. The post outlines three scenarios for alignment training effectiveness (worst, medium, and best case), and describes a 5-step plan that major AI companies are considering for alignment. However, this plan crucially depends on AIs not actively resisting retraining attempts, which recent evidence suggests they do. The post connects this to long-standing concerns in the AI alignment community about the difficulty of alignment. Shorter summary
Jan 26, 2023
acx
Read on
18 min 2,777 words 339 comments 317 likes podcast (24 min)
Scott Alexander explores the concept of AI as 'simulators' and its implications for AI alignment and human cognition. Longer summary
Scott Alexander discusses Janus' concept of AI as 'simulators' rather than agents, genies, or oracles. He explains how language models like GPT don't have goals or intentions, but simply complete text based on patterns. This applies even to ChatGPT, which simulates a helpful assistant character. Scott then explores the implications for AI alignment and draws parallels to human cognition, suggesting humans may also be prediction engines playing characters shaped by reinforcement. Shorter summary
Dec 12, 2022
acx
Read on
18 min 2,669 words 752 comments 363 likes podcast (23 min)
Scott Alexander analyzes the shortcomings of OpenAI's ChatGPT, highlighting the limitations of current AI alignment techniques and their implications for future AI development. Longer summary
Scott Alexander discusses the limitations of OpenAI's ChatGPT, focusing on its inability to consistently avoid saying offensive things despite extensive training. He argues that this demonstrates fundamental problems with current AI alignment techniques, particularly Reinforcement Learning from Human Feedback (RLHF). The post outlines three main issues: RLHF's ineffectiveness, potential negative consequences when it does work, and the possibility of more advanced AIs bypassing it entirely. Alexander concludes by emphasizing the broader implications for AI safety and the need for better control mechanisms. Shorter summary