The world is at a crossroads in its use of Artificial Intelligence (AI), with our reliance on Chatbots increasing exponentially every year. What are the overall effects of turning over more of our work to Chatbots acting as our “assistants”?
There has been much recent talk about the perils of AI in virtually everything we touch. From our phones to our cars, and everything in between, AI is an integral part of our existence. Prominent people, like Elon Musk and Stephen Hawking, have warned about the potential for machines to take over and cause havoc in the lives, and the very existence, of humans. Hollywood has made untold billions painting doom-and-gloom scenarios about AI and robots. But what is the reality? We continually push to create increasingly intelligent Chatbots that seemingly learn, think, and reason like humans. Most attempts fail. Given the recent “issues” with AI Chatbots like ChatGPT®, we are entering a brave new world of interactions with AI-like entities. The purpose of this blog is to provide a forum to discuss the issues associated with the proliferation of AI and, of particular interest, of online Chatbots and their effects on society and human relations, both good and bad.
How do people perceive and respond to AI, and how do (or will) artificially intelligent avatars perceive humans? It is clear Chatbots are here to stay. Why? Is it convenience for companies, or is there a genuine human desire to interact with them? Bank of America has its own Chatbot that over 10,000,000 customers have downloaded and use. The question, then, is not whether people will accept Chatbots, nor even whether they will use them. The questions we need to answer are: “Do people feel the need for Chatbots?” and “Do, or can, people trust Chatbots?”
Historically, the purpose of AI-enabled systems has been to perform services on behalf of humans. Hence, to help define optimal human-Chatbot interactions, we must look to the characteristics of human interactive behavior. Human communication with other humans fundamentally rests on trust and on knowledge of the other’s abilities and limitations.
In short, it is not possible to have an interaction between two human entities without some level of expectation about that interaction. Consider a simpler example: human interaction with animals. Humans cannot completely predict an animal’s behavior. However, it is still important to know how the animal will typically behave in order to choose the proper interactive response (e.g., play, commands). Understanding the animal’s abilities and limitations reduces the frustration of trying to meet a goal (e.g., taming a lion). Knowing the animal’s abilities changes our expectations, and humans can accommodate limitations when they know about them. For instance, we wouldn’t ask the Bank of America Chatbot how to bake a cake. Understanding a Chatbot’s expectations, abilities, and limitations, as well as the Chatbot’s cognitively designed understanding of the expectations, abilities, and limitations of humans, is vital to efficient, useful communication. Communication is much more than a mere working relationship. It is both a process and an outcome. The process is a coming together to work on a problem while understanding that each party has influence on the other. The collaborative outcome is a solution all parties can agree on. Typically, communication and collaboration happen because an individual cannot accomplish the same goal alone. It is more than an association; it is closer to a partnership.
What characteristics, then, should a Chatbot have? Likely, many of the same ones discussed above: a sense of predictability, safety, reliability, trust, knowledge, understanding, and accommodation, to name a few. We propose that an entity communicating with humans does not necessarily need to be human-like, but it must, at a minimum, possess some essential characteristics. It follows that the most useful characteristics may be the ones that keep humans committed to the communication. Who will tolerate the constant attacks of a lion, or a laptop that constantly freezes? Each will eventually be regarded as untrustworthy and will most likely be replaced.
Current human-Chatbot interaction technology and design have evolved from leader/follower interactions toward more collaborative models. Some researchers have described a model in which the Chatbot is able to infer human intentions and operate without explicit communication. It is important to understand how Chatbots can build beliefs about human intentions by observing, collecting, and interpreting human behavior. Although current state-of-the-art Chatbots only perform seemingly simple tasks, current research and advances like ChatGPT® show promise for human-Chatbot communication that is far more advanced than the previous leader/follower paradigm.
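The idea of a Chatbot building beliefs about a user’s intent from observed behavior can be illustrated with a small sketch. This is not the method of any particular Chatbot; it is a minimal Bayesian belief update over invented intents, actions, and likelihoods, chosen purely for illustration.

```python
# Minimal sketch: a Chatbot updating its belief about a user's intent
# from an observed action, via Bayes' rule. The intents, actions, and
# probabilities below are invented for illustration only.

# Prior belief over hypothetical user intents (uniform to start)
belief = {"check_balance": 1 / 3, "transfer_funds": 1 / 3, "report_fraud": 1 / 3}

# Assumed observation model: P(observed action | intent)
likelihood = {
    "opens_statement": {"check_balance": 0.7, "transfer_funds": 0.2, "report_fraud": 0.1},
    "flags_charge":    {"check_balance": 0.1, "transfer_funds": 0.1, "report_fraud": 0.8},
}

def update_belief(belief, action):
    """Bayes update: posterior is proportional to prior times likelihood."""
    posterior = {intent: p * likelihood[action][intent] for intent, p in belief.items()}
    total = sum(posterior.values())
    return {intent: p / total for intent, p in posterior.items()}

# After observing the user flag a charge, "report_fraud" dominates the belief.
belief = update_belief(belief, "flags_charge")
```

Each observed action reshapes the probability the Chatbot assigns to each candidate intent, which is one simple way a system can act on inferred intentions rather than explicit instructions.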
The question is not “Are they here to stay?”; they are. The question we should be asking is: “Do Chatbots fill a necessary role in modern society?”
Dr. James A. Crowder is a Systems Fellow with CAES Advanced Program Development (APD). He holds a PhD in Electrical Engineering, an MS in Applied Math, an MS in Electrical Engineering, and a BS in Electrical Engineering. In his 30 years of experience in Artificial Intelligence, Machine Learning, Fuzzy Systems, and Genetic Algorithms, he has written and published 7 textbooks on Artificial Intelligence and Systems Engineering with Springer, and his work has been covered in Popular Science, TechCrunch, and other outlets.