How do you avoid getting lost in Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

13 posts found
Mar 30, 2023
acx
16 min 2,048 words 1,126 comments 278 likes podcast
Scott Alexander critiques Tyler Cowen's use of the 'Safe Uncertainty Fallacy' in discussing AI risk, arguing that uncertainty doesn't justify complacency.
Scott Alexander critiques Tyler Cowen's use of the 'Safe Uncertainty Fallacy' in relation to AI risk. This fallacy argues that because a situation is completely uncertain, it will be fine. Scott explains why this reasoning is flawed, using examples like the printing press and alien starships to illustrate his points. He argues that even in uncertain situations, we need to make best guesses and not default to assuming everything will be fine. Scott criticizes Cowen's lack of specific probability estimates and argues that claiming total uncertainty is intellectually dishonest. The post ends with a satirical twist on Cowen's conclusion about society being designed to 'take the plunge' with new technologies.
Feb 06, 2023
acx
17 min 2,128 words 284 comments 122 likes podcast
Scott Alexander investigates the 'wisdom of crowds' hypothesis using survey data, exploring its effectiveness and potential applications.
Scott Alexander discusses the 'wisdom of crowds' hypothesis, which claims that the average of many guesses is better than a single guess. He tests this concept using data from his ACX Survey, focusing on a question about the distance between Moscow and Paris. The post explores how error rates change with crowd size, whether individuals can benefit from averaging multiple guesses, and compares his findings to a larger study by Van Dolder and Van Den Assem. Scott also ponders why wisdom of crowds isn't more widely used in decision-making and speculates on its potential applications and limitations.
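As a rough illustration of the averaging effect the post tests (a hypothetical simulation, not Scott's analysis or the actual survey data; the true distance and the error model are assumptions for the demo), the crowd's mean error shrinking as crowd size grows can be sketched like this:

```python
import random
import statistics

TRUE_DISTANCE_KM = 2500  # assumed round figure for Moscow-Paris, demo only

def simulate_guess(rng):
    # One noisy respondent: unbiased here, with large individual error.
    return rng.gauss(TRUE_DISTANCE_KM, 800)

def mean_crowd_error(crowd_size, trials=2000, seed=0):
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        guesses = [simulate_guess(rng) for _ in range(crowd_size)]
        errors.append(abs(statistics.mean(guesses) - TRUE_DISTANCE_KM))
    return statistics.mean(errors)

for n in (1, 5, 25, 100):
    print(f"crowd of {n:>3}: mean absolute error ~ {mean_crowd_error(n):,.0f} km")
```

Averaging cancels independent noise (error falls roughly as 1/sqrt(n)) but not a bias the whole crowd shares, which is one inherent limit on the technique.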
Jan 26, 2023
acx
22 min 2,777 words 339 comments 317 likes podcast
Scott Alexander explores the concept of AI as 'simulators' and its implications for AI alignment and human cognition.
Scott Alexander discusses Janus' concept of AI as 'simulators' rather than agents, genies, or oracles. He explains how language models like GPT don't have goals or intentions, but simply complete text based on patterns. This applies even to ChatGPT, which simulates a helpful assistant character. Scott then explores the implications for AI alignment and draws parallels to human cognition, suggesting humans may also be prediction engines playing characters shaped by reinforcement.
Dec 16, 2022
acx
5 min 535 words 175 comments 89 likes podcast
Scott Alexander announces a formalized 2023 Prediction Contest with 50 questions, multiple modes of play, and cash prizes.
Scott Alexander announces the 2023 Prediction Contest, a formalized version of his annual predictions. The contest includes 50 forecasting questions and 5 demographic questions. Participants can play in Blind Mode (limited research, no external sources) or Full Mode (unlimited research). There are multiple prizes, including $500 for winners in different categories. The contest aims to create a standard for comparing forecasters and forecasting sites, with plans to correlate personality traits with forecasting accuracy in a future ACX Survey.
Apr 14, 2020
ssc
33 min 4,215 words 863 comments podcast
Scott Alexander argues that the media's failure in coronavirus coverage was not about prediction, but about poor probabilistic reasoning and decision-making under uncertainty.
This post discusses the media's failure in covering the coronavirus pandemic, arguing that the issue was not primarily one of prediction but of probabilistic reasoning and decision-making under uncertainty. Scott Alexander argues that while predicting the exact course of the pandemic was difficult, the media and experts failed to properly convey and act on the potential risks even when the probability seemed low. He contrasts this with examples of good reasoning from individuals who took the threat seriously early on, not because they were certain it would be catastrophic, but because they understood the importance of preparing for low-probability, high-impact events.
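The decision-making point can be made with a toy expected-value comparison (the numbers below are invented for illustration and are not from the post): a low probability of a very large loss can still dominate the cost of cheap early preparation.

```python
# Invented numbers, purely illustrative.
p_disaster = 0.10            # assumed 10% chance the outbreak becomes catastrophic
loss_if_unprepared = 1_000   # arbitrary harm units if it does and nothing was done
cost_of_preparing = 20       # cost of early preparation that may turn out "wasted"
loss_if_prepared = 200       # assumed reduced harm if prepared and disaster hits

expected_cost_ignore = p_disaster * loss_if_unprepared                     # 100
expected_cost_prepare = cost_of_preparing + p_disaster * loss_if_prepared  # 40

print(expected_cost_ignore, expected_cost_prepare)  # ignoring is worse in expectation
```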
Apr 08, 2020
ssc
14 min 1,765 words 91 comments podcast
Scott Alexander reviews and analyzes his 2019 predictions, finding he was generally well-calibrated but slightly underconfident across all confidence levels.
Scott Alexander reviews his predictions for 2019, made at the beginning of the year. He lists all the predictions, marking which ones came true, which were false, and which were thrown out. The predictions cover various topics including US politics, economics and technology, world events, personal projects, and his personal life. Scott then analyzes his performance, showing that he was generally well-calibrated but slightly underconfident across all confidence levels. He attributes this underconfidence to trying to leave a cushion for unexpected events, which didn't materialize in 2019. Scott notes that his worst failures were underestimating Bitcoin and overestimating SpaceX's ability to launch their crew on schedule.
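For readers unfamiliar with the exercise, checking calibration just means bucketing resolved predictions by stated confidence and comparing each bucket's stated confidence to its observed hit rate. A minimal sketch with made-up predictions (not Scott's actual list or scoring code):

```python
from collections import defaultdict

# Hypothetical resolved predictions: (stated confidence, came true?).
predictions = [
    (0.6, True), (0.6, False), (0.6, True), (0.6, True),
    (0.7, True), (0.7, True), (0.7, True), (0.7, False),
    (0.8, True), (0.8, True), (0.8, True), (0.8, True),
    (0.9, True), (0.9, True), (0.9, True), (0.9, True),
]

by_confidence = defaultdict(list)
for confidence, came_true in predictions:
    by_confidence[confidence].append(came_true)

for confidence in sorted(by_confidence):
    outcomes = by_confidence[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    # Well calibrated: hit_rate roughly equals confidence.
    # Hit rates consistently above confidence suggest underconfidence.
    print(f"said {confidence:.0%}: got {hit_rate:.0%} right ({len(outcomes)} predictions)")
```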
Jan 02, 2018
ssc
16 min 2,076 words 216 comments podcast
Scott Alexander evaluates his 2017 predictions, analyzing his accuracy and calibration across different confidence levels.
Scott Alexander reviews his predictions for 2017, made at the beginning of the year. He lists all predictions, marking false ones with strikethrough and uncertain ones in italics. He then analyzes his accuracy, presenting a graph of his calibration. Scott notes he was slightly overconfident at the 70% level, an overcorrection of the previous year's underconfidence. He observes a tendency to overestimate how smoothly personal affairs would go and underestimate the US economy. Overall, he's satisfied with his calibration, showing neither global over- nor underconfidence.
Dec 31, 2016
ssc
13 min 1,634 words 130 comments podcast
Scott Alexander evaluates his 2016 predictions, finding good overall calibration with slight underconfidence at 70% probability, consistent with previous years.
Scott Alexander reviews his predictions for 2016, comparing them to actual outcomes. He lists predictions for world events and personal/community matters, marking false predictions with strikethrough and true ones intact. He then calculates his accuracy for different confidence levels, finding he was generally well-calibrated but slightly underconfident at 70% probability. He compares this year's results to previous years, noting a similar pattern of underconfidence in medium probabilities. Overall, he considers his 2016 predictions successful and promises predictions for 2017 soon.
Nov 07, 2016
ssc
9 min 1,158 words 953 comments podcast
Scott Alexander argues that the 2016 US election outcome shouldn't drastically change our understanding of politics, given how close the race was.
Scott Alexander argues that the outcome of the 2016 US presidential election shouldn't dramatically change our understanding of politics and society. He criticizes both extreme predictions of a certain Hillary Clinton victory and a certain Donald Trump victory, pointing out that the race was close enough that the outcome could be determined by random factors like weather. Scott suggests that people should precommit to their views on politics and society rather than drastically changing them based on the election result. He uses his own January 2016 prediction of Trump having a 20% chance of winning (conditional on winning the Republican primary) as an example of a reasonable prediction, given that prediction markets on election eve gave Trump a 17.9% chance.
Jan 02, 2016
ssc
8 min 992 words 173 comments podcast
Scott Alexander evaluates the accuracy of his 2015 predictions, finding overall good calibration and considering it a successful year.
Scott Alexander reviews his predictions for 2015, assessing their accuracy. He lists 35 predictions across world events and personal life, marking successful ones and crossing out failed ones. He then scores them based on confidence levels, presenting the results in a graph. Overall, Scott considers it a successful year for his predictions, with good calibration except at the 50% confidence level. He also comments on Scott Adams' reported prediction success for 2015, suggesting ways to verify the authenticity of such claims and expressing interest in seeing Adams make concrete predictions for 2016.
Jun 13, 2015
ssc
5 min 615 words 211 comments podcast
Scott Alexander makes belated predictions for 2015, covering world events and personal life with varying confidence levels.
Scott Alexander belatedly makes predictions for 2015, covering world events and personal life. He explains his delay and sets out 35 predictions with confidence levels ranging from 50% to 99%. The predictions cover topics such as international conflicts, economic issues, US politics, and personal goals. Scott invites readers to suggest additional predictions.
Sep 25, 2013
ssc
5 min 552 words 79 comments podcast
Scott Alexander analyzes the results of a prediction contest about American political opinions, revealing participants' inaccuracies and biases in estimating current views and changes over time.
This post discusses the results of a prediction contest where participants estimated current American opinions on political issues and how those opinions have changed over 22 years. Scott analyzes the accuracy of predictions, noting that participants were generally poor at estimating current opinions but slightly better at predicting changes. The post reveals that participants tended to overestimate how leftist Americans are and how much society has shifted left. Scott also mentions that there was little difference in accuracy between reactionary and progressive participants, and names the most accurate predictors.
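The two headline findings (low accuracy overall, plus a systematic leftward skew in estimates) correspond to two simple statistics. A hedged sketch with invented numbers, since the post does not publish its scoring code:

```python
import statistics

# Hypothetical (guess, actual) pairs for "% of Americans agreeing" questions;
# a positive signed error means the participant overestimated agreement
# (in this demo, with left-coded statements).
responses = [(70, 55), (48, 40), (62, 61), (35, 30), (80, 66)]

signed_errors = [guess - actual for guess, actual in responses]

print("mean absolute error:", statistics.mean([abs(e) for e in signed_errors]))
print("mean signed error:  ", statistics.mean(signed_errors))
# A mean signed error well above zero is the kind of systematic
# overestimation the post reports.
```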
Sep 24, 2013
ssc
3 min 333 words 137 comments podcast
Scott Alexander creates a test using Pew Research data to gauge predictions about American values changes from 1987 to 2009, hypothesizing that people will underestimate their own political position's strength.
Scott Alexander discusses a test he created using Pew Research data on American values from 1987 to 2009. The test asks participants to predict the percentage of Americans agreeing with certain statements in 2009, estimate the change since 1987, and state their own political position. Scott hypothesizes that most people will underestimate the strength of their own political position due to the underdog effect. He's particularly interested in Neoreactionaries' responses given their belief in the power of the Cathedral. Scott assures that the test isn't rigged and asks participants not to Google the answers.