Navigating the Impact of AI on Human Interaction: From Efficiency to Dehumanization


With the advent of ChatGPT and GPT-4, people have been using these tools to accomplish a wide range of tasks, especially in academia. The tools have proven immensely useful for academic work, such as identifying key researchers in a field, summarizing or even synthesizing current research, and editing the articles that researchers write. Critical scholars have also discussed the biases embedded in AI tools and how these human-derived biases can be normalized and held up as truth, because public opinion treats AI as a "know-all," "know-better" superior tool grounded in science.

I would rather discuss a different issue: how using AI may change interaction patterns between humans. Many people, myself included, use AI for all sorts of tasks, with prompts like "Can you help me translate the following into English?", "Can you tell me who the prominent researchers are in the field of global citizenship education?", or "Can you edit the text below, correcting any grammatical mistakes?" In just a few seconds, these powerful tools produce a useful response. I usually spend a few additional minutes proofreading the translation or making sure the AI-edited text does not distort the meaning I want to express. The whole experience is useful, interesting, and efficient. I do not have to care about what the AI is doing right now, whether it is busy with other things, attending to personal needs, or having family time. You see my point? I would argue that one of the major advantages of AI chatbots is that we can treat them as machines; we do not have to care about their needs, emotions, schedules, or subjectivity. I would argue further that, if we do not reflect on them critically, these experiences can inadvertently shape our interactions with other human beings and result in dehumanizing patterns of action, especially towards people who are vulnerable to domination.

Think about a boss who has grown used to having AI complete his tasks "efficiently and conveniently," and to not having to care about the party who does the work, because it is an AI. It is not only natural but quite likely that the boss will compare the work of the AI with the work of his or her subordinates and find the latter more demanding and less efficient. It is also possible that the boss's mode of communication will shift from the care and respect owed between two human beings to the cold, emotionless style used with an AI, which has no feelings to hurt. The "can you do this and that for me" style can find its way into everyday communication between superior and subordinate in various settings, with the superior expecting the subordinate to complete the work in seconds, just like an AI tool.

Some would even argue that, sure, this communication pattern may be the driving force behind another round of human evolution, toward a more "advanced form" free of human needs and emotions. But before we get there, I would argue it is still worthwhile and necessary to honor our emotions and needs as humans in our communications, and to be critical and reflective about the effect that indulging in AI could have on us.

When we talk to another human being, can we still talk in a way that demonstrates our love, care, respect, and mindfulness of the needs, dignity, and emotions of a fellow human being just like ourselves?
