How do you avoid getting lost among Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

Sep 10, 2024
acx
20 min · 2,556 words · Comments pending
Scott Alexander refutes Freddie deBoer's argument against expecting major events in our lifetimes, presenting a more nuanced approach to estimating such probabilities.
Scott Alexander responds to Freddie deBoer's argument against expecting significant events like a singularity or apocalypse in our lifetimes. Alexander critiques deBoer's 'temporal Copernican principle,' arguing that it fails basic sanity checks and misunderstands anthropic reasoning. He presents a more nuanced approach to estimating probabilities of major events, considering factors like technological progress and population distribution. Alexander concludes that while prior probabilities for significant events in one's lifetime are not negligible, they should be updated based on observations and common sense.
Aug 27, 2019
ssc
19 min · 2,371 words · 254 comments · podcast
Scott reviews 'Reframing Superintelligence' by Eric Drexler, which proposes future AI as specialized services rather than general agents, contrasting with Nick Bostrom's scenarios.
Scott Alexander reviews Eric Drexler's book 'Reframing Superintelligence', which proposes that future AI may develop as a collection of specialized superintelligent services rather than general-purpose agents. The post compares this view to Nick Bostrom's more alarming scenarios in 'Superintelligence'. Scott discusses the potential safety advantages of AI services, their limitations, and some remaining concerns. He reflects on why he didn't consider this perspective earlier and acknowledges the ongoing debate in the AI alignment community about these different models of future AI development.
Dec 17, 2015
ssc
33 min · 4,288 words · 798 comments · podcast
Scott Alexander argues that OpenAI's open-source strategy for AI development could be dangerous, potentially risking human extinction if AI progresses rapidly.
Scott Alexander critiques OpenAI's strategy of making AI research open-source, arguing it could be dangerous if AI develops rapidly. He compares it to giving nuclear weapon plans to everyone, potentially leading to catastrophe. The post analyzes the risks and benefits of open-sourcing AI, discusses the potential for a hard takeoff in AI development, and examines the AI control problem. Scott expresses concern that competition in AI development may be forcing desperate strategies, potentially risking human extinction.
Jul 13, 2014
ssc
18 min · 2,250 words · 111 comments · podcast
Scott explores a dystopian future scenario of hyper-optimized economic productivity, speculating on the emergence of new patterns and forms of life from this 'economic soup'.
This post explores a dystopian future scenario based on Nick Bostrom's 'Superintelligence', where a brutal Malthusian competition leads to a world of economic productivity without consciousness or moral significance. Scott describes this future as a 'Disneyland with no children', where everything is optimized for economic productivity, potentially eliminating consciousness itself. He then speculates on the possibility of emergent patterns arising from this hyper-optimized 'economic soup', comparing it to biological systems and Conway's Game of Life. The post ends with musings on the potential for new forms of life to emerge from these patterns, and the possibility of multiple levels of such emergence.