How can you explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.


2 posts found
Feb 23, 2022 · acx · 72 min · 11,062 words · 385 comments · 121 likes · podcast (71 min)
Scott Alexander reviews competing methodologies for predicting AI timelines, focusing on Ajeya Cotra's biological anchors approach and Eliezer Yudkowsky's critique.
Scott Alexander reviews Ajeya Cotra's report on AI timelines for Open Philanthropy, which uses biological anchors to estimate when transformative AI might arrive, and Eliezer Yudkowsky's critique of this methodology. The post explains Cotra's approach, Yudkowsky's objections, and various responses, ultimately concluding that while the report may not significantly change existing beliefs, the debate highlights important considerations in AI forecasting.
Jun 08, 2017 · ssc · 16 min · 2,467 words · 286 comments
Scott analyzes a new survey of AI researchers, showing diverse opinions on AI timelines and risks, with many acknowledging potential dangers but few prioritizing safety research.
This post discusses a recent survey of AI researchers about their opinions on AI progress and potential risks. The survey, conducted by Grace et al., shows a wide range of predictions about when human-level AI might be achieved, with significant uncertainty among experts. The post highlights that while many AI researchers acknowledge potential risks from poorly aligned AI, few consider it among the most important problems in the field. Scott compares these results to a previous survey by Müller and Bostrom, noting some differences in methodology and results. He concludes by expressing encouragement that researchers are taking AI safety arguments seriously, while also pointing out a potential disconnect between acknowledging risks and prioritizing work on them.