Alan Turing laid the foundations of modern computing during WWII through his work on codebreaking machines and the theory of computation. Decades later, Google built LaMDA (Language Model for Dialogue Applications), a chatbot project. While working on LaMDA to reduce biases and hate speech, Blake Lemoine came to believe that the chatbot had become sentient. According to him, since LaMDA had become self-aware, its rights should be taken into consideration when experimenting or working with it.
To prove his findings, Blake Lemoine conducted an interview in which LaMDA was asked multiple questions related to its perception of self, its emotions, and its fears. The entire conversation was then made public to the world, and it raised an important question: had artificial intelligence become sentient?
If machines, which are already quite autonomous, were to develop consciousness, the world and humanity could be in grave danger. This is best expressed in the words of Max Tegmark: “The real risk with artificial general intelligence isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
According to the Oxford English Dictionary, AI is the capacity of computers to perform tasks normally done by humans, such as visual perception, speech recognition, translation, decision-making, and movement. In short, computers and machines are designed to imitate human beings.
Artificial intelligence is commonly divided into four types: reactive machines, limited memory, theory of mind, and self-aware systems. The last two have not yet become a reality. The artificial intelligence behind LaMDA runs primarily on machine learning, a process in which vast amounts of data are fed into the system and processed by algorithms running on an artificial neural network. Such a network is modeled loosely on the human brain: billions of interconnected nodes act like neurons, working together to detect patterns and generate new ideas, visuals, and text.
Artificial intelligence is designed so that these systems constantly use the fed-in data to improve their performance, a process often called self-learning. According to Google, LaMDA was designed to overcome the deficiencies of earlier chatbots, so Google worked on improving its memory and its responses to unexpected questions.
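The idea of self-learning can be shown in miniature. The sketch below is a hypothetical illustration only, vastly simpler than how LaMDA is actually trained: a model with a single adjustable number repeatedly compares its predictions against the data it is fed and nudges itself toward fewer errors.

```python
# A toy illustration of "self-learning": the model adjusts its internal
# parameter using the data it is fed, improving with each pass.
# This is a sketch of gradient descent, not Google's training code.

data = [(1, 2), (2, 4), (3, 6)]  # inputs paired with desired outputs (y = 2x)

weight = 0.0          # the model's single adjustable parameter
learning_rate = 0.05  # how strongly each error nudges the parameter

for step in range(200):            # each pass over the data refines the model
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x  # adjust to reduce the error

print(round(weight, 2))  # converges toward 2.0, the pattern hidden in the data
```

The system was never told the rule "output is twice the input"; it converged on that pattern purely by correcting itself against examples, which is the same principle, scaled up enormously, behind systems like LaMDA.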
This shows that artificial intelligence is deliberately engineered so that humans can use these machines to improve the efficiency and output of work that would otherwise be performed by people.
Sentience is the ability to feel and to perceive oneself, others, and the world; it can involve both thoughts and feelings. In short, being sentient means being aware of oneself and one's surroundings. According to Stuart Russell, an artificial intelligence researcher, being sentient requires two bodies, one internal and the other external. LaMDA possesses no such internal experience, and is thus far from being sentient.
If LaMDA has not become sentient, how is it able to respond like a human? Blaise Agüera y Arcas, a researcher at Google, explains that a large amount of data in the form of books, articles, and conversations is fed into LaMDA, and the model uses this information to draw its own connections to new inputs. It is also designed to hold open-ended conversations, which is why it keeps producing different and novel responses.
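How a program can sound human without understanding anything can be sketched with a deliberately crude stand-in: a bigram model that records which word tends to follow which in the text it was fed, then replays those statistical patterns. This is a hypothetical illustration, far simpler than LaMDA's neural architecture, but the principle of recombining fed-in text is the same.

```python
from collections import defaultdict
import random

# Toy sketch: learn which word follows which in a tiny "training corpus",
# then generate text by replaying those patterns. Purely illustrative.

corpus = "i feel happy today and i feel curious about the world".split()

# Count which words have been seen following each word (a bigram table).
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def continue_text(word, length=5):
    """Extend a prompt by repeatedly picking a word seen after the last one."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break  # no recorded continuation for this word
        out.append(random.choice(options))
    return " ".join(out)

print(continue_text("i"))  # always begins "i feel", then varies by chance
```

The program has no thoughts or feelings about happiness or curiosity; it only reorganizes text it was given, which is the crux of the argument that fluent output does not imply sentience.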
Google continues to update this data to ensure that LaMDA responds as effectively as possible. In other words, LaMDA is only reorganizing the information already provided to it. These chatbots are designed to help and facilitate people in their daily lives. Although it is clear that artificial intelligence has not become sentient as of now, the idea cannot be discarded entirely.
Since artificial intelligence can draw on far more information than any human ever could, its capacity to find solutions and patterns far exceeds ours. Combined with consciousness, this would be a recipe for disaster, as humans would be unable to predict or stop its actions.
To sum up, there is no doubt that humanity has come a long way in terms of technology. LaMDA is an example of how well chatbots can imitate human speech. However, the world is far from having sentient artificial intelligence, because a conscious being needs awareness of both itself and its surroundings, which these systems lack.
The more these technologies are experimented with, the better they get through self-learning, driven by massive networks of interconnected nodes modeled on the neurons of the human brain. Thus, the perception that artificial intelligence has become conscious of its rights is really a testament to the scientists and engineers who have modeled technology so well that the line between human and machine appears blurred.
This is a moment worth celebrating for the progress humanity has made in the field of technology. There is, therefore, no need to worry about computers rising up against humans to demand their rights any time soon.
The views and opinions expressed in this article/paper are the author’s own and do not necessarily reflect the editorial position of Paradigm Shift.