How can you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

2 posts found
Oct 24, 2024
acx
40 min · 5,495 words · comments pending · podcast (35 min)
Scott Alexander reports on the Progress Studies Conference, discussing advances in energy, AI, housing policy, and other fields, with an overall optimistic tone about recent developments and the potential for future progress.
Scott Alexander recounts his experience at the Progress Studies Conference, which focused on various technological and social advancements. The conference covered topics such as energy (particularly solar and nuclear power), politics and regulation, AI, the YIMBY movement, self-driving cars, and other areas of potential progress. The overall mood was optimistic about recent developments in these fields, with many speakers highlighting how regulatory changes could accelerate progress. Scott notes that while the conference itself may not be directly responsible for recent advances, it reflects a growing awareness of the importance of technological progress and the need to address regulatory barriers.
Mar 07, 2023
acx
11 min · 1,425 words · 600 comments · 178 likes · podcast (9 min)
Scott Alexander uses Kelly betting to argue why AI development, unlike other technologies, poses too great a risk to civilization to pursue aggressively.
Scott Alexander responds to Scott Aaronson's argument for being less hostile to AI development. While agreeing with Aaronson's points about nuclear power and other technologies where excessive caution caused harm, Alexander argues that AI is different. He uses the concept of Kelly betting from finance to explain why: even with good bets, you shouldn't risk everything at once. Alexander contends that while technology is generally a great bet, AI development risks 'betting everything' on civilization's future. He concludes that while some AI development is necessary, we must treat existential risks differently than other technological risks.
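For readers unfamiliar with the concept the post leans on, here is a minimal sketch of the standard Kelly criterion (not code from the post itself): for a bet that pays net odds b with win probability p, the bankroll fraction that maximizes long-run growth is f* = p − (1 − p)/b, which is well below 1 even for very favorable bets.

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll to stake.

    p: probability of winning (0 < p < 1)
    b: net odds, i.e. profit per unit staked on a win

    Returns f* = p - (1 - p) / b; a non-positive result
    means the bet has no edge and should be skipped.
    """
    return p - (1 - p) / b

# Even a very favorable bet -- 60% win chance at even odds --
# warrants staking only about a fifth of the bankroll,
# never everything at once.
print(round(kelly_fraction(0.6, 1.0), 2))  # 0.2
```

This is the arithmetic behind the post's point: betting the full bankroll on any single wager, however favorable, eventually guarantees ruin, which is the analogy Alexander applies to staking civilization's future on AI.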