How do you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

4 posts found
Jul 03, 2023 · acx · 31 min · 4,327 words · 400 comments · 134 likes · podcast (26 min)
Shorter summary: Scott Alexander discusses various scenarios of AI takeover based on the Compute-Centric Framework, exploring gradual power shifts and potential conflicts between humans and AI factions.
Longer summary: Scott Alexander explores various scenarios of AI takeover based on the Compute-Centric Framework (CCF) report, which predicts a continuous but fast AI takeoff. He presents three main scenarios: a 'good ending' where AI remains aligned and beneficial, a scenario where AI is slightly misaligned but humans survive, and a more pessimistic scenario comparing human-AI relations to those between Native Americans and European settlers. The post also includes mini-scenarios discussing concepts like AutoGPT, AI amnesty, company factions, and attempts to halt AI progress. The scenarios differ from fast takeoff predictions, emphasizing gradual power shifts and potential factional conflicts between humans and various AI groups.
Aug 25, 2022 · acx · 43 min · 5,916 words · 394 comments · 55 likes · podcast (40 min)
Shorter summary: Scott Alexander summarizes and responds to comments on his review of 'What We Owe The Future', addressing debates around population ethics, longtermism, and moral philosophy.
Longer summary: This post highlights key comments on Scott Alexander's review of William MacAskill's book 'What We Owe The Future'. It covers various reactions and debates around topics like the repugnant conclusion in population ethics, longtermism, moral philosophy, AI risk, and the nature of happiness and suffering. Scott responds to several comments, clarifying his views on philosophy, moral reasoning, and the challenges of population ethics.
Feb 15, 2018 · ssc · 23 min · 3,126 words · 597 comments · podcast (29 min)
Shorter summary: Scott Alexander provides detailed predictions for the next five years on topics ranging from AI and politics to science and culture, with probability estimates for specific outcomes.
Longer summary: Scott Alexander makes predictions for the next five years, covering a wide range of topics including AI, politics, culture wars, economics, technology, and science. He discusses potential developments in AI capabilities, European politics, global economic trends, religious shifts in the US, the future of US political parties, culture war dynamics, healthcare and economic divides, cryptocurrency, genetic research, space exploration, and global risks. The post is structured with general predictions followed by specific numbered predictions with probability estimates for each topic. Scott maintains a skeptical tone about dramatic changes, often predicting gradual shifts or continuations of current trends rather than radical transformations.
Feb 06, 2015 · ssc · 6 min · 719 words · 595 comments
Shorter summary: A satirical future op-ed argues that not giving children 'super-enhancement gene therapy' is child abuse, mirroring current pro-vaccination arguments.
Longer summary: This satirical post, written as if from the future year 2065, critiques current anti-vaccination arguments by applying them to a hypothetical future technology: super-enhancement designer baby gene therapy. The author, posing as a bioethicist, argues that not giving children this therapy is child abuse and a public health issue. The post mimics common pro-vaccination arguments, citing increased crime rates, car accidents, and disease outbreaks as consequences of not enhancing children. It concludes by calling for severe restrictions on unenhanced children and punishment for parents who refuse the therapy. The satire aims to highlight the absurdity of current anti-vaccination arguments by applying similar logic to a more extreme scenario.