New applications of artificial intelligence technology keep surprising us, but they probably shouldn’t. After winning Jeopardy!, IBM’s Watson went on to assist attorneys in building cases as “Ross” in bankruptcy law; to advise doctors at Memorial Sloan Kettering, one of the world’s foremost cancer treatment centers; and to serve as the virtual teaching assistant “Jill” at Georgia Tech, its natural language processing convincing enough that students in the class were unaware of their TA’s nature until professor Ashok Goel revealed it after the final exam.
At the Denver Open Coffee Club recently (a local meetup for discussing tech and startup issues), we raised the subject of artificial intelligence and automation, and humans’ relationship with them. Passions rose a bit as we talked not about the astounding capabilities and myriad applications we could achieve through thoughtful use of artificial intelligence (well, there was some of that), but about just how far, ethically, we as a culture were willing to go. The conversation was less about the many people a class of robot workers will likely soon render unemployable, and the ethical and economic questions we will be forced to reckon with when that happens, than about the very human psychological rejection of decisions and judgments made by a machine. The largest obstacle to a full embrace of artificial intelligence in our culture is not our lack of technological capacity but ourselves, said one of our group. When it comes to some things, humans seem to just want humans.
But still, humans built a sophisticated question-answering machine like Watson. We have many, many questions, and many of them we want, and feel comfortable letting, AI answer. But there seems to be a boundary somewhere, something nebulous: a red line past which the asker says, “I don’t want a machine to answer this. This question needs a human.” We don’t know where this line is, and it probably differs from person to person, or even for the same person on different days, but it has to do with the meaty questions of morality, ethics, and judgment. Can you teach an AI to be moral? Do you want to, or would we only be doing that because, when it comes to these questions, we’re afraid of the answers an AI would offer?
I would argue that it is less about where we draw the line between those questions and more about how we define and separate real and artificial intelligence. What I find chilling is the place where, after enough contemplation of what a human is, a human itself starts to seem not unlike a machine. We are complex organisms, with mechanics inside that allow us not just to move about the world but to ponder it and our place in it: a sort of miraculous ability, but a mechanical one nonetheless, derived from our cerebral cortex.
I wouldn’t argue that an AI exists today that could conceivably replace all of the cognitive abilities of a human, or its capacity to balance rational thought with ethical judgment (unless I missed something in the news). In high-stakes situations, I would still prefer a person to be in charge. Given time, though, I think it’s inevitable that we will wrestle more with these questions and place more responsibility on AI, allowing what we create the space to learn and make those judgments. The problem we’ll wrestle with then is whether and when an AI, given human or near-human cognitive abilities, will begin to recognize its own self and its place, and begin a human-like sentient experience in its own right. I hope we’ll have confronted the relationship between ourselves, our morality, and our AI by then.