Shorter summary
Scott reviews a paper by leading researchers that attempts to determine whether AIs are conscious using computational theories of consciousness. He critiques its conflation of access and phenomenal consciousness, and predicts that society will ascribe consciousness to AIs inconsistently, based on their social roles rather than their underlying architecture.
Longer summary
Scott reviews a new paper by Yoshua Bengio, David Chalmers, and others that attempts to determine whether AI systems are conscious by checking them against computational theories of consciousness such as Recurrent Processing Theory and Global Workspace Theory. The paper finds that current AIs lack the 'something something feedback' mechanisms these theories require, but that future architectures could have them. Scott criticizes the paper for conflating access consciousness (the ability to access and report one's own mental states) with phenomenal consciousness (subjective inner experience), and argues that even if AIs satisfied these computational criteria, it would remain unclear whether they truly had subjective experience. He predicts a paradox in which society treats some AIs (like companions) as conscious while denying consciousness to functionally identical AIs in other roles (like factory robots), much as we treat dogs differently from pigs today.