How can you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

17 posts found
Sep 18, 2024
acx
19 min · 2,577 words · 612 comments · 325 likes · podcast (18 min)
Scott Alexander examines how AI achievements, once considered markers of true intelligence or danger, are often dismissed as unimpressive, potentially leading to concerning AI behaviors being normalized.
Scott Alexander discusses recent developments in AI, focusing on two AI systems: Sakana, an 'AI scientist' that can write computer science papers, and Strawberry, an AI that demonstrated hacking abilities. He uses these examples to explore the broader theme of how our perception of AI intelligence and danger has evolved. The post argues that as AI achieves various milestones once thought to indicate true intelligence or danger, humans tend to dismiss these achievements as unimpressive or non-threatening. This pattern leads to a situation where potentially concerning AI behaviors might be normalized and not taken seriously as indicators of real risk.
Sep 10, 2024
acx
19 min · 2,556 words · 435 comments · 205 likes · podcast (15 min)
Scott Alexander refutes Freddie deBoer's argument against expecting major events in our lifetimes, presenting a more nuanced approach to estimating such probabilities.
Scott Alexander responds to Freddie deBoer's argument against expecting significant events like a singularity or apocalypse in our lifetimes. Alexander critiques deBoer's 'temporal Copernican principle,' arguing that it fails basic sanity checks and misunderstands anthropic reasoning. He presents a more nuanced approach to estimating probabilities of major events, considering factors like technological progress and population distribution. Alexander concludes that while prior probabilities for significant events in one's lifetime are not negligible, they should be updated based on observations and common sense.
Jul 03, 2023
acx
31 min · 4,327 words · 400 comments · 134 likes · podcast (26 min)
Scott Alexander discusses various scenarios of AI takeover based on the Compute-Centric Framework, exploring gradual power shifts and potential conflicts between humans and AI factions.
Scott Alexander explores various scenarios of AI takeover based on the Compute-Centric Framework (CCF) report, which predicts a continuous but fast AI takeoff. He presents three main scenarios: a 'good ending' where AI remains aligned and beneficial, a scenario where AI is slightly misaligned but humans survive, and a more pessimistic scenario comparing human-AI relations to those between Native Americans and European settlers. The post also includes mini-scenarios discussing concepts like AutoGPT, AI amnesty, company factions, and attempts to halt AI progress. The scenarios differ from fast takeoff predictions, emphasizing gradual power shifts and potential factional conflicts between humans and various AI groups.
Apr 05, 2023
acx
13 min · 1,784 words · 569 comments · 255 likes · podcast (12 min)
Scott Alexander challenges the idea of an 'AI race', comparing AI to other transformative technologies and discussing scenarios where the race concept might apply.
Scott Alexander argues against the notion of an 'AI race' between countries, suggesting that most technologies, including potentially AI, are not truly races with clear winners. He compares AI to other transformative technologies like electricity, automobiles, and computers, which didn't significantly alter global power balances. The post explains that the concept of an 'AI race' mainly makes sense in two scenarios: one where AI must be aligned before it becomes potentially destructive, and a 'hard takeoff' scenario where AI rapidly self-improves. Scott criticizes those who simultaneously dismiss alignment concerns while emphasizing the need to 'win' the AI race. He also discusses post-singularity scenarios, arguing that many current concerns would likely become irrelevant in such a radically transformed world.
Mar 07, 2023
acx
11 min · 1,425 words · 600 comments · 178 likes · podcast (9 min)
Scott Alexander uses Kelly betting to argue why AI development, unlike other technologies, poses too great a risk to civilization to pursue aggressively.
Scott Alexander responds to Scott Aaronson's argument for being less hostile to AI development. While agreeing with Aaronson's points about nuclear power and other technologies where excessive caution caused harm, Alexander argues that AI is different. He uses the concept of Kelly betting from finance to explain why: even with good bets, you shouldn't risk everything at once. Alexander contends that while technology is generally a great bet, AI development risks 'betting everything' on civilization's future. He concludes that while some AI development is necessary, we must treat existential risks differently than other technological risks.
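(A quick aside for readers who haven't met the Kelly criterion the summary leans on: a minimal sketch follows. The 60% win probability, even-money payout, and code are illustrative assumptions of ours, not taken from the post. For an even-money bet won with probability p, the Kelly fraction is f* = p - (1 - p); simulating repeated bets shows why staking the whole bankroll is ruinous even when the odds are favorable.)

    import random

    # Illustrative only: a favorable even-money bet with a 60% win chance.
    # For even-money odds the Kelly fraction is f* = p - (1 - p).
    p = 0.60
    kelly_fraction = p - (1 - p)  # 0.20

    def simulate(fraction, rounds=1000, seed=0):
        """Grow a bankroll of 1.0 by betting `fraction` of it each round."""
        rng = random.Random(seed)
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * fraction
            bankroll += stake if rng.random() < p else -stake
        return bankroll

    print("Kelly fraction:", simulate(kelly_fraction))  # grows steadily
    print("Bet everything:", simulate(1.0))             # first loss -> 0 forever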
Dec 08, 2022
acx
27 min · 3,733 words · 911 comments · 319 likes · podcast (23 min)
Scott Alexander argues for a nuanced view of cryptocurrency, highlighting its legitimate uses and potential benefits despite common criticisms.
Scott Alexander explains why he is 'less than infinitely hostile' to cryptocurrency, despite widespread criticism. He argues that crypto has clear use cases in countries with weak banking systems, that major crypto projects are rarely outright scams, and that crypto provides valuable insurance against authoritarianism by enabling financial transactions when governments try to restrict them. Scott also contends that while the crypto financial system may be inferior to traditional finance in many ways, its decentralized nature gives it unique advantages. He concludes by suggesting that crypto's potential utility shouldn't be dismissed, even if one doesn't personally need it.
Jul 13, 2022
acx
42 min · 5,840 words · 449 comments · 246 likes · podcast (38 min)
Scott reviews a biography of John von Neumann, exploring the mathematician's life, genius, and views on existential risk from technology.
This post reviews 'The Man From The Future', a biography of John von Neumann by Ananyo Bhattacharya. It covers von Neumann's early life and education in Hungary, his extraordinary intellectual abilities, his work across various scientific fields, and his views on existential risks from technology. The review explores theories about why so many geniuses emerged from Hungary in the early 20th century, details von Neumann's personality and social skills, and discusses his controversial views on nuclear war. It ends with von Neumann's thoughts on how humanity might survive the dangers of rapid technological progress.
Oct 04, 2021
acx
71 min · 9,901 words · 741 comments · 79 likes · podcast (76 min)
Scott Alexander discusses reader comments on why modern architecture differs from older styles, exploring economic, cultural, and artistic explanations.
Scott Alexander summarizes and responds to comments on his previous post about modern architecture. The comments cover various theories for why modern architecture looks different from older styles, including economic factors, changes in artistic tastes, cultural shifts, and technological developments. Scott engages with these ideas, sometimes agreeing and sometimes disagreeing, while exploring the broader implications for art, culture, and society.
Jun 04, 2021
acx
23 min · 3,124 words · 547 comments · 66 likes · podcast (23 min)
A review of 'Where's My Flying Car?' by J. Storrs Hall, exploring the causes of technological stagnation and the potential for future progress in flying cars, nuclear energy, and nanotechnology.
This book review discusses J. Storrs Hall's 'Where's My Flying Car?', which explores the causes of the Great Stagnation since the 1970s. Hall argues that the stagnation was caused by flatlining energy usage, stemming from a failure to adopt nuclear energy due to excessive regulation driven by 'green fundamentalism'. The review covers Hall's analysis of flying cars, nuclear power, and nanotechnology, detailing how regulation and public funding have hindered progress in these areas. It also touches on Hall's critique of government-funded science and his vision for future technological advancements.
Dec 23, 2019
ssc
31 min · 4,312 words · 71 comments · podcast (29 min)
The post argues that automation and AI are unlikely to cause a sustained economic crisis, as new jobs will be created to replace those automated, though the benefits may primarily go to capital owners.
This post discusses the potential economic impact of automation and AI, addressing concerns about job displacement and economic crisis. The authors argue that while automation will continue to change the job market, it is unlikely to lead to a sustained economic crisis in the foreseeable future. They examine historical trends in employment, current technological capabilities, and economic theories to support their argument. The post concludes that new jobs will continue to be created as old ones are automated, maintaining overall employment levels, though the benefits of automation may flow primarily to capital owners.
Apr 22, 2019
ssc
27 min · 3,673 words · 256 comments · podcast (28 min)
Scott examines hyperbolic growth models in population and economics and their apparent cancellation around 1960, and speculates on AI's potential to restart such growth.
This post explores the concept of hyperbolic growth and its implications for technological and economic progress. Scott begins by discussing Heinz von Foerster's model of population growth, which predicted infinite population by 2026. He then applies this concept to economic growth, showing how it seemed to be on a hyperbolic trajectory until around 1960. The post examines why this growth pattern stopped, linking it to population growth trends. Scott also discusses the Industrial Revolution's role in this model and how it didn't significantly alter the overall growth trajectory. Finally, he speculates on the potential for AI to restart hyperbolic growth by providing a new way to convert economic resources into research capacity.
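(A toy illustration of the hyperbolic model the summary describes; the parameters below are assumptions chosen for illustration, not von Foerster's actual fit. If growth is proportional to the square of the population, dN/dt = a*N^2, the closed-form solution N(t) = N0 / (1 - a*N0*(t - t0)) reaches infinity at a finite date, unlike exponential growth.)

    # Toy parameters chosen so the finite-time singularity lands in 2026,
    # matching the "infinite population by 2026" prediction mentioned above.
    N0 = 3.0e9            # population at t0 (illustrative)
    t0 = 1960
    a = 1.0 / (N0 * 66)   # places the singularity at t0 + 66 = 2026

    def hyperbolic(t):
        """Closed-form solution of dN/dt = a*N**2 with N(t0) = N0."""
        return N0 / (1 - a * N0 * (t - t0))

    for year in (1960, 1990, 2010, 2020, 2025):
        print(year, f"{hyperbolic(year):.2e}")
    # The denominator hits zero at 2026; the model blows up in finite time.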
Scott Alexander explores how straight-line trends on graphs might mask the true impact of interventions, using examples like the Clean Air Act and Moore's Law to illustrate the complexity of interpreting such data.
Scott Alexander discusses the difficulty of interpreting trend lines on graphs, particularly when evaluating the effectiveness of interventions or policies. He uses examples like the Clean Air Act, OSHA's impact on workplace safety, and Moore's Law to illustrate how straight-line trends can persist despite significant interventions or technological advancements. The post suggests that these trends might be maintained by control systems, where various factors adjust to keep the trend consistent. This perspective complicates the assessment of policy effectiveness and technological impact, as their effects might be visible in other areas rather than directly on the graph. The author expresses uncertainty about how to distinguish between scenarios where interventions truly don't matter and those where they're part of a complex control system.
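(A tiny simulation makes the control-system idea above concrete; everything here is an illustrative assumption of ours, not from the post. If feedback keeps pushing a measured quantity back toward an underlying trend line, a one-time intervention shows up only as a temporary blip, and the long-run graph still looks straight.)

    # A quantity tracking a linear trend under negative feedback.
    # An intervention at t=50 adds a one-time boost of 20, but the feedback
    # absorbs it, so the series soon hugs the straight trend line again.
    slope, k = 1.0, 0.3                    # trend slope, feedback strength (assumed)
    x, series = 0.0, []
    for t in range(100):
        target = slope * t                 # the underlying straight-line trend
        x += slope + k * (target - x)      # drift plus correction toward trend
        if t == 50:
            x += 20                        # the intervention
        series.append(x)

    print("t=49:", round(series[49], 1))   # 50.0 -- on the trend line
    print("t=51:", round(series[51], 1))   # 66.0 -- blip from the intervention
    print("t=99:", round(series[99], 1))   # ~100.0 -- back on the trend line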
Jul 08, 2017
ssc
14 min · 1,903 words · 394 comments
Scott criticizes an article downplaying AI risks in favor of mundane technologies, arguing this represents misplaced caution given AI's potential existential threat.
Scott Alexander critiques a Financial Times article that argues simple technologies like barbed wire are often more transformative than complex ones like AI. While agreeing that mundane innovations can be important, Scott argues this shouldn't lead us to dismiss concerns about AI risks. He introduces the concept of local vs. global caution, suggesting that dismissing AI risks as unlikely is the wrong kind of caution given the potential stakes. He points out the severe underfunding of AI safety research compared to trivial pursuits, arguing that society's apathy towards AI risks is not cautious skepticism but dangerous insanity.
Apr 01, 2017
ssc
16 min · 2,180 words · 140 comments
A fictional G.K. Chesterton essay defends AI risk concerns against criticisms, arguing that seemingly fantastical ideas often become reality and that contemplating the infinite leads to practical progress.
Scott Alexander presents a fictional essay in the style of G.K. Chesterton, responding to criticisms of AI risk concerns. The essay argues that dismissing AI risk as fantastical is shortsighted, drawing parallels to historical skepticism of now-realized technological advancements. It refutes arguments that AI risk believers neglect real-world problems, citing examples of their charitable work. The piece emphasizes the importance of contemplating the infinite for driving progress and solving practical problems, suggesting that AI, like other seemingly fantastical ideas, may well become reality.
Jul 21, 2014
ssc
14 min · 1,830 words · 157 comments
Scott Alexander refutes claims of American decline based on skyscraper construction, presenting data showing continued growth and a recent boom in supertall buildings.
This post debunks the claim that a decline in American skyscraper construction indicates a decline in American civilization. Scott presents data showing that America's capacity to build skyscrapers has not decreased, and in fact, there has been a recent boom in supertall skyscraper construction. He provides graphs showing the height of the tallest skyscrapers over time, the number of supertall skyscrapers, and the cost per foot of skyscraper construction. The post also points out that the period from 1940 to 1970, often considered a time of great technological progress, actually saw a decline in skyscraper construction, suggesting that skyscrapers are not a good indicator of technological or societal progress.
Jul 21, 2014
ssc
11 min · 1,526 words · 206 comments
Scott Alexander argues that real technological progress is driven by usefulness and profitability, not the coolness factor often seen in futuristic predictions.
Scott Alexander critiques a certain strain of futurology that predicts impressive but impractical technological advancements. He argues that real technological progress is driven by usefulness and profitability, not coolness. The post begins by listing numerous technological advancements from 1969 to 2014, then transitions to discussing why certain sci-fi predictions haven't materialized. Scott explains that space colonization, undersea domes, and massive arcologies aren't practical or necessary given current circumstances. He concludes that the lack of moon missions since 1969 is due to a lack of compelling incentives, not technological stagnation.
Mar 07, 2013
ssc
29 min · 3,942 words · 174 comments
Scott argues that even if past cultures were superior, restoring them is impossible because cultures evolve to fit their technological conditions, which have changed dramatically.
This post argues against Reactionary ideas, even if one grants their assumptions about the superiority of past cultures. The main points are: 1) Historical changes are driven by technological progress, not individual actors. 2) Cultures evolve to adapt to their technological conditions. 3) Past cultures were adapted to past conditions, not current ones, so restoring them wouldn't work. 4) Many negative aspects of modern society are due to technological changes, not political ones. Scott uses analogies like computer operating systems and puppets to illustrate these ideas. He concludes by outlining possible counterarguments Reactionaries could make to save their position.