How do you avoid getting lost in Scott Alexander's 1500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

3 posts found
Feb 07, 2016 · ssc · 35 min · 4,490 words · 206 comments · podcast
Scott Alexander shares and comments on highlights from Philip Tetlock's 'Superforecasting', discussing forecasting, cognitive biases, and organizational effectiveness.

This post is a collection of highlights and commentary on Philip Tetlock's book 'Superforecasting'. Scott Alexander shares quotes from the book and provides his own analysis on topics such as evidence-based medicine, cognitive biases in forecasting, the importance of probabilistic thinking, and organizational effectiveness. He also reflects on the implications of these ideas for fields like intelligence analysis, politics, and rationality.
Feb 04, 2016 · ssc · 17 min · 2,156 words · 364 comments · podcast
Scott Alexander reviews 'Superforecasting' by Philip Tetlock, discussing the traits of highly accurate predictors and the book's validation of rationalist techniques.

This post reviews Philip Tetlock's book 'Superforecasting', which explores the qualities of highly accurate predictors. Tetlock's Good Judgment Project identified a group of 'superforecasters' who consistently outperformed others, including CIA analysts. The review discusses the characteristics of these superforecasters, emphasizing their understanding of logic and probability, ability to break down problems, and resistance to cognitive biases. Scott Alexander notes the similarities between superforecasters' methods and rationalist techniques, suggesting the book's value lies in providing high-status validation for these approaches rather than presenting new information to those already familiar with rationality concepts.
Jul 23, 2015 · ssc · 22 min · 2,739 words · 391 comments · podcast
Scott Alexander explores the possibility of a 'General Factor of Correctness' and its implications for rationality and decision-making across various fields.

Scott Alexander discusses the concept of a 'General Factor of Correctness', inspired by Eliezer Yudkowsky's essay on the 'Correct Contrarian Cluster'. He explores whether people who are correct about one controversial topic are more likely to be correct about others, beyond what we'd expect from chance. The post delves into the challenges of identifying such a factor, including separating it from expert consensus agreement, IQ, or education level. Scott examines studies on calibration and prediction accuracy, noting intriguing correlations between calibration skills and certain beliefs. He concludes by emphasizing the importance of this concept to the rationalist project, suggesting that if such a 'correctness skill' exists, cultivating it could be valuable for improving decision-making across various domains.