Want to explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

14 posts found
Nov 07, 2024
acx
22 min 2,985 words Comments pending
Scott Alexander praises Polymarket's election success but argues their Trump odds were mispriced, explaining why Trump's win doesn't significantly validate their numbers over other forecasters.
Scott Alexander congratulates Polymarket for their success during the recent election, but argues that their Trump shares were mispriced by about ten cents. He uses Bayes' Theorem to explain why Trump's victory doesn't significantly vindicate Polymarket's numbers. Scott compares the situation to non-money forecasters like Metaculus versus real-money markets like Polymarket, explaining why he initially trusted the former more. He discusses the impact of a large bettor named Theo on Polymarket's odds and addresses several objections to his argument. Scott concludes that while prediction markets are valuable, they can sometimes fail and require critical thinking.
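For intuition, here is a minimal sketch of the Bayesian point, using hypothetical probabilities rather than the figures from Scott's post: a single binary outcome shifts the odds between two forecasters only by the ratio of the probabilities they assigned to it.

```python
# Hypothetical numbers, not the ones from the post: suppose Polymarket gave
# Trump a 60% chance and a rival forecaster gave 50%. How much should one
# Trump win move our trust between the two?
prior = 0.5                      # prior that Polymarket's price was the better estimate
p_win_if_polymarket = 0.60       # probability Polymarket assigned to the observed outcome
p_win_if_rival = 0.50            # probability the rival assigned to it

likelihood_ratio = p_win_if_polymarket / p_win_if_rival        # 1.2
posterior_odds = (prior / (1 - prior)) * likelihood_ratio      # 1.2
posterior = posterior_odds / (1 + posterior_odds)              # ~0.55

print(f"Posterior that Polymarket was the better forecaster: {posterior:.0%}")
# One win nudges 50% -> ~55%: a real but weak vindication.
```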
Jan 31, 2023
acx
35 min 4,767 words 141 comments 56 likes podcast (29 min)
Scott Alexander discusses recent developments in prediction markets and forecasting, including Metaculus' milestone, PredictIt's legal issues, and various prediction market topics.
This Mantic Monday post covers several topics related to prediction markets and forecasting. Scott discusses Metaculus reaching its one millionth prediction, PredictIt's legal battle with the CFTC, former Russian President Medvedev's outlandish 2023 predictions, conspiracy theory prediction markets, Scott's own 2022 prediction calibration results, updates on 'scandal markets', and highlights from various current prediction markets. He also shares some thoughts on the challenges and potential pitfalls of certain types of prediction markets.
Aug 04, 2022
acx
15 min 1,986 words 318 comments 89 likes podcast (16 min)
Scott Alexander examines the use of absurdity arguments, reflecting on his critique of Neom and offering strategies to balance absurdity heuristics with careful reasoning.
Scott Alexander reflects on his previous post mocking the Neom project, considering whether his use of the absurdity heuristic was justified. He explores the challenges of relying on absurdity arguments, acknowledging that everything ultimately bottoms out in such arguments. The post discusses when it's appropriate to use absurdity heuristics in communication and personal reasoning, and offers strategies for avoiding absurdity bias. These include calibration training, social epistemology, occasional deep dives into fact-checking, and examining why beliefs come to our attention. Scott concludes that while there's no perfect solution, these approaches can help balance the use of absurdity arguments with more rigorous thinking.
Jan 24, 2022
acx
12 min 1,551 words 140 comments 86 likes podcast (17 min)
Scott Alexander evaluates his 2021 predictions, analyzing his performance across various confidence levels and comparing his results to other forecasters and prediction markets.
Scott Alexander grades his 2021 predictions, made at the beginning of the year. He lists 108 predictions on various topics including politics, economics, technology, COVID-19, community events, personal life, work, and his blog. The post details which predictions came true (in bold) and which didn't (in italics). Scott then analyzes his performance, breaking down the accuracy rates for different confidence levels. He compares his results to a graph of expected vs. actual accuracy, finding he was slightly underconfident overall. The post concludes with a comparison to other forecasters and prediction markets, showing Scott performed well but was outperformed by both Zvi and the markets.
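The grading in these annual posts is essentially calibration bucketing: group predictions by stated confidence and check what fraction in each bucket came true. A minimal sketch with made-up data (not Scott's actual predictions):

```python
from collections import defaultdict

# Made-up (stated confidence, came true?) pairs standing in for a year's predictions.
predictions = [
    (0.6, True), (0.6, False), (0.7, True), (0.7, True), (0.7, False),
    (0.8, True), (0.8, True), (0.9, True), (0.95, True), (0.95, True),
]

buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"{confidence:.0%} predictions: {sum(outcomes)}/{len(outcomes)} true ({hit_rate:.0%})")

# Well-calibrated: hit rates track stated confidence. A hit rate above the
# stated confidence suggests underconfidence; below it, overconfidence.
```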
Apr 05, 2021
acx
14 min 1,888 words 165 comments 61 likes podcast (17 min)
Scott Alexander evaluates his 2020 predictions, finding he was generally overconfident, and discusses the implications for his prediction-making process.
Scott Alexander reviews his predictions for 2020, comparing them to actual outcomes. He analyzes his performance at different confidence levels, noting that he was consistently overconfident this year, particularly in the 50% and 95% categories. Scott attributes some of his errors to unexpected events like the prolonged COVID lockdown and the NYT situation. He reflects on whether this overconfidence in an unusually eventful year might balance out his slight underconfidence in more normal years. The post concludes with plans for future prediction exercises and a link to his ongoing prediction log.
Apr 08, 2020
ssc
13 min 1,765 words 91 comments podcast (15 min)
Scott Alexander reviews and analyzes his 2019 predictions, finding he was generally well-calibrated but slightly underconfident across all confidence levels.
Scott Alexander reviews his predictions for 2019, made at the beginning of the year. He lists all the predictions, marking which ones came true, which were false, and which were thrown out. The predictions cover various topics including US politics, economics and technology, world events, personal projects, and his personal life. Scott then analyzes his performance, showing that he was generally well-calibrated but slightly underconfident across all confidence levels. He attributes this underconfidence to trying to leave a cushion for unexpected events, which didn't materialize in 2019. Scott notes that his worst failures were underestimating Bitcoin and overestimating SpaceX's ability to launch their crew on schedule.
Jan 22, 2019
ssc
14 min 1,848 words 123 comments podcast (21 min)
Scott Alexander evaluates his 2018 predictions, analyzing his accuracy and discussing factors that affected his performance.
Scott Alexander reviews his predictions for 2018, made at the beginning of the year. He lists each prediction, marking those that came true, those that were false, and those he couldn't determine. Scott then analyzes his performance, presenting a calibration chart and discussing his accuracy at different confidence levels. He notes that he performed poorly on 50% predictions and 95% predictions. Scott attributes some of his inaccuracies to two unexpected events: the cryptocurrency crash and a personal breakup, which affected multiple correlated predictions. He concludes by mentioning he'll post 2019 predictions soon and invites readers to share their own predictions.
Jan 02, 2018
ssc
15 min 2,076 words 216 comments podcast (21 min)
Scott Alexander evaluates his 2017 predictions, analyzing his accuracy and calibration across different confidence levels.
Scott Alexander reviews his predictions for 2017, made at the beginning of the year. He lists all predictions, marking false ones with strikethrough and uncertain ones in italics. He then analyzes his accuracy, presenting a graph of his calibration. Scott notes he was slightly overconfident at the 70% level, likely from overcorrecting last year's underconfidence. He observes a tendency to overestimate how smoothly personal affairs would go and to underestimate the US economy. Overall, he's satisfied with his calibration, showing neither global over- nor underconfidence.
Dec 31, 2016
ssc
12 min 1,634 words 130 comments
Scott Alexander evaluates his 2016 predictions, finding good overall calibration with slight underconfidence at 70% probability, consistent with previous years.
Scott Alexander reviews his predictions for 2016, comparing them to actual outcomes. He lists predictions for world events and personal/community matters, marking false predictions with strikethrough and leaving true ones intact. He then calculates his accuracy for different confidence levels, finding he was generally well-calibrated but slightly underconfident at 70% probability. He compares this year's results to previous years, noting a similar pattern of underconfidence in medium probabilities. Overall, he considers his 2016 predictions successful and promises predictions for 2017 soon.
Jan 02, 2016
ssc
8 min 992 words 173 comments
Scott Alexander evaluates the accuracy of his 2015 predictions, finding overall good calibration and considering it a successful year.
Scott Alexander reviews his predictions for 2015, assessing their accuracy. He lists 35 predictions across world events and personal life, marking successful ones and crossing out failed ones. He then scores them based on confidence levels, presenting the results in a graph. Overall, Scott considers it a successful year for his predictions, with good calibration except at the 50% confidence level. He also comments on Scott Adams' reported prediction success for 2015, suggesting ways to verify the authenticity of such claims and expressing interest in seeing Adams make concrete predictions for 2016.
Aug 20, 2015
ssc
34 min 4,625 words 703 comments
Scott Alexander discusses the problem of overconfidence in probability estimates, arguing that extreme certainty is rarely justified, especially for complex future predictions.
Scott Alexander discusses the problem of overconfidence in probability estimates, particularly when people claim to be extremely certain about complex future events. He explains how experiments show that people are often vastly overconfident, even when they claim 99.9999% certainty. Scott argues that extreme confidence is rarely justified, especially for predictions about technological progress or societal changes. He suggests that overconfidence contributes to intolerance and close-mindedness, and that studying history can help reduce overconfidence by showing how often confident predictions have been wrong in the past.
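As a back-of-the-envelope illustration (the observed error rate below is hypothetical, not a figure from the post), a stated confidence implies how often you can afford to be wrong, and calibration experiments compare that to how often people actually are:

```python
# A claimed certainty implies a maximum error frequency.
claimed_confidence = 0.999999                       # "99.9999% sure"
implied_error_rate = 1 - claimed_confidence         # 1 in a million

observed_error_rate = 0.05                          # hypothetical: wrong 5% of the time on such claims

print(f"Implied errors per million claims:  {implied_error_rate * 1_000_000:,.0f}")
print(f"Observed errors per million claims: {observed_error_rate * 1_000_000:,.0f}")
# Being wrong even a few percent of the time is tens of thousands of times more
# often than a 99.9999% claim allows, which is the sense in which such certainty
# is overconfident.
```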
Jul 23, 2015
ssc
20 min 2,739 words 391 comments
Scott Alexander explores the possibility of a 'General Factor of Correctness' and its implications for rationality and decision-making across various fields.
Scott Alexander discusses the concept of a 'General Factor of Correctness', inspired by Eliezer Yudkowsky's essay on the 'Correct Contrarian Cluster'. He explores whether people who are correct about one controversial topic are more likely to be correct about others, beyond what we'd expect from chance. The post delves into the challenges of identifying such a factor, including separating it from expert consensus agreement, IQ, or education level. Scott examines studies on calibration and prediction accuracy, noting intriguing correlations between calibration skills and certain beliefs. He concludes by emphasizing the importance of this concept to the rationalist project, suggesting that if such a 'correctness skill' exists, cultivating it could be valuable for improving decision-making across various domains.
Jun 13, 2015
ssc
5 min 615 words 211 comments
Scott Alexander makes belated predictions for 2015, covering world events and personal life with varying confidence levels.
Scott Alexander belatedly makes predictions for 2015, covering world events and personal life. He explains his delay and sets out 35 predictions with confidence levels ranging from 50% to 99%. The predictions cover topics such as international conflicts, economic issues, US politics, and personal goals. Scott invites readers to suggest additional predictions.
Jan 01, 2015
ssc
7 min 974 words 91 comments
Scott Alexander evaluates his 2014 predictions, finding himself well-calibrated across various confidence levels, and jokingly declares himself trustworthy on all matters.
Scott Alexander reviews his predictions for 2014, made at the start of that year, and evaluates his calibration. He lists 59 predictions covering various topics including politics, world events, personal life, and the rationalist community. Each prediction is marked as a success or failure. Scott then provides a breakdown of his accuracy at different confidence levels, from 50% to 99%. The results show that he was well-calibrated across all levels, with perfect accuracy for predictions at 90% confidence and above. He concludes by declaring himself 'impressively well-calibrated' and jokingly suggests that he should be trusted about everything. The post ends with a mention that 2015 predictions will be coming soon.