How do you avoid getting lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

6 posts found
Sep 10, 2024 · acx · 20 min · 2,556 words · comments pending
Scott Alexander refutes Freddie deBoer's argument against expecting major events in our lifetimes, presenting a more nuanced approach to estimating such probabilities.
Scott Alexander responds to Freddie deBoer's argument against expecting significant events like a singularity or apocalypse in our lifetimes. Alexander critiques deBoer's 'temporal Copernican principle,' arguing that it fails basic sanity checks and misunderstands anthropic reasoning. He presents a more nuanced approach to estimating probabilities of major events, considering factors like technological progress and population distribution. Alexander concludes that while prior probabilities for significant events in one's lifetime are not negligible, they should be updated based on observations and common sense.
Apr 05, 2023 · acx · 14 min · 1,784 words · 569 comments · 255 likes · podcast
Scott Alexander challenges the idea of an 'AI race', comparing AI to other transformative technologies and discussing scenarios where the race concept might apply.
Scott Alexander argues against the notion of an 'AI race' between countries, suggesting that most technologies, including potentially AI, are not truly races with clear winners. He compares AI to other transformative technologies like electricity, automobiles, and computers, which didn't significantly alter global power balances. The post explains that the concept of an 'AI race' mainly makes sense in two scenarios: when AI must be aligned before it becomes potentially destructive, or in a 'hard takeoff' where AI rapidly self-improves. Scott criticizes those who dismiss alignment concerns while simultaneously emphasizing the need to 'win' the AI race. He also discusses post-singularity scenarios, arguing that many current concerns would likely become irrelevant in such a radically transformed world.
Oct 09, 2017 · ssc · 23 min · 2,930 words · 507 comments · podcast
Scott Alexander criticizes a Boston Review article on futurism for focusing on identity politics rather than substantive future predictions, arguing this approach trivializes important technological and societal developments.
Scott Alexander critiques an article from Boston Review about futurism, highlighting five main issues. He argues that the article fails to make real arguments about the future, misunderstands the concept of the Singularity, wrongly associates certain technologies with privilege, falsely portrays conflict between different futurist groups, and grossly underestimates the impact of potential future changes. Scott contrasts this with his view of futurism as a serious endeavor to improve the human condition and prepare for potentially massive changes. He expresses frustration that much current discourse about the future focuses on identity politics rather than substantive issues, drawing a parallel with an 18th-century futurist novel that was more concerned with religious prejudice than with imagining actual changes.
May 22, 2015 · ssc · 43 min · 5,524 words · 517 comments · podcast
Scott Alexander provides evidence that many prominent AI researchers are concerned about AI risk, contrary to claims in some popular articles.
Scott Alexander responds to articles claiming that AI researchers are not concerned about AI risk by providing a list of prominent AI researchers who have expressed concerns about the potential risks of advanced AI. He argues that there isn't a clear divide between 'skeptics' and 'believers', but rather a general consensus that some preliminary work on AI safety is needed. The post highlights that the main differences lie in the timeline for AI development and when preparations should begin, not in whether the risks are real.
Apr 06, 2013 · ssc · 19 min · 2,355 words · 28 comments · podcast
Scott Alexander discusses Robin Hanson's vision of a future with emulated humans, debating the preservation of human values and the nature of future societal coordination.
Scott Alexander recounts a conversation with Robin Hanson about the future of humanity, focusing on Hanson's vision of a Malthusian future with emulated humans. They discuss the potential loss of human values like love in such a future, the concept of anti-predictions, and the ability of future societies to coordinate and solve problems. The dialogue touches on the speed of technological change, the preservation of values, and the potential for cultural variation in a post-human world. Scott challenges some of Hanson's views, particularly on the preservation of human values in a hypercompetitive future.
Apr 05, 2013 · ssc · 16 min · 2,063 words · 113 comments · podcast
Scott Alexander discusses Robin Hanson's idea of investing charitable donations for later use, exploring the psychological resistance to it and attempting to debunk that resistance with various arguments.
Scott Alexander attends a talk on efficient charity and discusses Robin Hanson's controversial idea of investing charitable donations instead of giving immediately. He explores the psychological resistance to this idea and attempts to debunk it with various arguments, including the declining efficacy of charity over time and the possibility of a technological singularity. Although he initially expected to conclude that investing donations is a bad idea, his rough calculations suggest it might be beneficial unless there's a high chance of catastrophic events preventing future donation.