How can you explore Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

3 posts found
Feb 20, 2023
acx
54 min · 7,468 words · 483 comments · 142 likes · podcast (48 min)
Scott Alexander grades his 2018 predictions for 2023 and makes new predictions for 2028, with a strong focus on AI developments.
Scott Alexander reviews his predictions from 2018 for 2023, grading himself on accuracy across various domains including AI, world affairs, US culture and politics, economics, science/technology, and existential risks. He then offers new predictions for 2028, focusing heavily on AI developments and their potential impacts on society, economics, and politics.
Apr 19, 2021
acx
69 min · 9,658 words · 1,013 comments · 96 likes · podcast (60 min)
Scott Alexander evaluates his predictions about the Trump presidency, finding he performed about average overall, with some notable successes and failures.
Scott Alexander reviews and grades his predictions about Donald Trump's presidency, covering topics from the diversity of Trump's base to the likelihood of a coup. He analyzes his successes and failures, discussing his performance on prediction markets and his overall accuracy compared to the average pundit. Scott concludes that he did about average in his predictions, with notable successes in race-related predictions and on prediction markets, but he also overestimated Trump's competence and underestimated his continued support among Republicans.
Jul 23, 2015
ssc
20 min · 2,739 words · 391 comments
Scott Alexander explores the possibility of a 'General Factor of Correctness' and its implications for rationality and decision-making across various fields.
Scott Alexander discusses the concept of a 'General Factor of Correctness', inspired by Eliezer Yudkowsky's essay on the 'Correct Contrarian Cluster'. He explores whether people who are correct about one controversial topic are more likely to be correct about others, beyond what we'd expect from chance. The post delves into the challenges of identifying such a factor, including separating it from agreement with expert consensus, IQ, or education level. Scott examines studies on calibration and prediction accuracy, noting intriguing correlations between calibration skill and certain beliefs. He concludes by emphasizing the importance of this concept to the rationalist project, suggesting that if such a 'correctness skill' exists, cultivating it could be valuable for improving decision-making across many domains.