Want to explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan site lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

3 posts found
Nov 03, 2022
acx
7 min · 966 words · 706 comments · 359 likes · podcast (8 min)
Scott Alexander distinguishes between moderation and censorship in social media, proposing opt-in settings for banned content as a solution to balance user preferences and free speech.
Scott Alexander argues that moderation and censorship are distinct concepts often conflated in debates about social media content. He defines moderation as a business practice to improve user experience, while censorship involves third-party intervention against users' wishes. The post proposes a solution where platforms could implement opt-in settings for banned content, allowing users to choose their level of exposure. This approach would maintain the benefits of moderation while avoiding the pitfalls of censorship. Scott acknowledges some arguments for true censorship but emphasizes the importance of separating these concepts to foster more productive debates on the topic.
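As a loose illustration of the opt-in idea (the post stays at the conceptual level, so this is not Scott's design; the category names and the whole API here are hypothetical), a per-user visibility setting might look something like this:

```python
from dataclasses import dataclass, field

# Hypothetical categories a platform hides by default; illustrative only.
MODERATED_CATEGORIES = {"spam", "harassment", "gore"}

@dataclass
class UserSettings:
    # Categories this user has explicitly opted in to seeing.
    opted_in: set = field(default_factory=set)

def is_visible(post_categories: set, settings: UserSettings) -> bool:
    # A post stays hidden only if it carries a moderated category the
    # user has not opted in to: moderation as a default, not a mandate.
    hidden = (post_categories & MODERATED_CATEGORIES) - settings.opted_in
    return not hidden

# A user who opts in to "gore" sees such posts; everyone else keeps
# the default, moderated experience.
user = UserSettings(opted_in={"gore"})
print(is_visible({"gore"}, user))   # True
print(is_visible({"spam"}, user))   # False
```

The point of the sketch is the asymmetry: the platform's moderation decisions set the default, but any user can override them for themselves, which is what separates moderation from censorship in the post's framing.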
Feb 27, 2019
ssc
13 min · 1,790 words · 285 comments · podcast (14 min)
Scott Alexander analyzes an article about Facebook moderators' working conditions, drawing parallels to his experience in psychiatric hospitals and discussing the challenges of content moderation.
Scott Alexander discusses a Verge article about the challenging work conditions of Facebook content moderators. He acknowledges the difficulty of their job, which involves exposure to disturbing content and adherence to complex rules. Scott draws parallels to his experience in psychiatric hospitals, noting how strict regulations often result from previous scandals or lawsuits. He critiques the article's stance, suggesting that many of the problems it highlights are consequences of attempts to address issues raised by similar investigative reports. Scott also ponders the balance between maintaining safety and creating a humane work environment, and expresses concern about the article's implications regarding the spread of conspiracy theories among moderators.
May 06, 2015
ssc
13 min · 1,721 words · 647 comments
Scott Alexander examines the potential future of online content filtering technology, considering its benefits, drawbacks, and societal implications.
This post explores the future of online content filtering technology and its potential implications. Scott begins by describing existing tools like Tumblr Savior and Twitter blockbots, then speculates about more advanced AI-driven filtering systems. He considers three main possibilities: 1) everyone being better off by avoiding trolls, 2) people becoming overly sensitive by never encountering opposing views, and 3) a shift in discourse favoring the powerful over the powerless. The post concludes by suggesting that explicit filtering choices might lead to more thoughtful engagement with opposing views and the formation of separate communities with their own norms.
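To make the filtering tools concrete, the Tumblr Savior-style blocklists the post mentions roughly amount to user-chosen keyword matching. This is a greatly simplified sketch, not the actual extension's implementation (the real tool rewrites page markup rather than filtering strings):

```python
def blocklist_filter(posts, blocked_terms):
    # Hide any post containing a user-chosen term, in the spirit of
    # Tumblr Savior's keyword blocklists.
    blocked = {term.lower() for term in blocked_terms}
    return [p for p in posts if not any(t in p.lower() for t in blocked)]

# The choice of terms is the user's own explicit filtering decision,
# which is what the post's "explicit filtering choices" refers to.
feed = ["Great soup recipe", "Hot take on politics today"]
print(blocklist_filter(feed, ["politics"]))  # ['Great soup recipe']
```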