Does It Matter if AI Is Sentient?
The hard problem and the even harder problem.
Almost five years ago in this blog, I wrote that I had made my peace with the possibility — a near inevitability, some argue — that super-intelligent machines will eventually replace human beings atop Earth’s food chain. (“Computer Says No — or Why I Am Fine with the Robot Uprising” — January 30, 2019.)
Since then, the odds of that outcome seem only to have increased, as has the extent of panicked discussion about it in the general culture.
I’m not saying I’m looking forward to human obsolescence, or that we shouldn’t do whatever we can to try to forestall it; I’m already annoyed by the self-checkout at the supermarket telling me to remove my unscanned items from the fucking bagging area. I’m just saying that it may be unstoppable, and we might want to resign ourselves to the replacement of carbon-based life by silicon-based life as a natural step in the evolution of the planet.
That’s how I can sleep at night. (That and a nightly cocktail of Everclear grain alcohol and Hawaiian Punch called a Waimea Closeout.)
For me, a given in that scenario has always been that along the way these super-intelligent machines will have achieved human-like “consciousness” as it is conventionally defined. But lately I’ve begun to wonder if that will be the…