Is an AI bot a "person"?
As the digital world has spread into our lives, people are often heard to sigh in frustration and say something like, “Can I just speak to a real person, please?” But in the age of chatbots, even the legal world is confused about what a person is.
As this snippet is being written in mid-2024, there is a fascinating court case in Canada about a chatbot. Trained on enormous samples of the English language, chatbots are surprisingly capable at solving problems, interacting with people, and even writing poems – though the results would not win any prizes.
Air Canada used a chatbot to give airline information to online enquirers, but the bot gave badly wrong answers to one man, who lost money as a result and sued the airline. The airline’s defense was that the man should not have trusted the chatbot. (But how could he have known that?)
Another very interesting defense was that the responses of a chatbot can never be completely predicted, even by its creator, so that the chatbot was actually an independent entity in law – a person, in fact – and that person, not the airline, was really responsible for the misinformation. So the man should sue the chatbot!
This attracted both amusement and ridicule from legal experts, but they are themselves partly responsible, because they have begun declaring unusual objects to be legally “persons”.
The Maori culture in New Zealand has tended to treat the physical and less tangible properties of a river, including its surroundings and inhabitants, as a mystical whole, and as part of this, starting in about 1988, some activists pressed for legal recognition of the river. Finally, in March 2017, by decision of the NZ Parliament, the Whanganui River – NZ’s longest navigable river – was declared a “person”. This was rather a technical term, but the decision clearly stated that the river was to be treated on a level with human persons. Three days later a similar ruling was applied in India to two other historic rivers, the Ganges and the Brahmaputra; but this was later overturned by the Indian Supreme Court, so legal experts can differ. One problem with declaring a river to be a person was that the river could be sued if it flooded! If judgment went against it, I foresee difficulties in putting a flooding river in jail!
Originally, in England, sometime between 1066 and 1500 – the Middle English period – the word “person” meant a mask for a character in a play, not a living, breathing being. Only late in the period was it applied to real individuals, and by 1444 to a collection of people, institutions, and so on. Even today in law “person” is defined very broadly. A chatbot might well be included in the concept of “person”!
One can imagine a chatbot being sued. It could probably defend itself in court, although its arguments might be rather strange – an example of the old joke that a person who defends himself in court has a fool for a client.
I have just asked ChatGPT (an earlier version), “Are you a person?” It replied,
No, I'm not a person. I'm an artificial intelligence language model created by OpenAI called ChatGPT. I'm here to assist you with any questions or tasks you have. How can I help you today?
So I asked it whether I could actually be another bot talking to it. It said it was increasingly difficult to distinguish between humans and machines, but regardless it was here to engage in conversation and “assist you to the best of my abilities.”
I personally(!) would prefer to say that a fetus is more like a person than a river is. From that it would follow that God must be a Super-Person? Yep, Super, anyhow.