Can a machine be intelligent?

R. Ramanujam, Azim Premji University, Bangalore

This is the first article of a series on Artificial Intelligence.

The term Artificial Intelligence (AI) has been causing a big buzz for some years now. AI is said to be the next big thing after the Internet, in its ability to change the way people live. There are even fears that AI systems and robots may take control over human beings in the future. Is this for real? These are all computer programs; how can they be intelligent? Is all this talk just hype, advertisements by companies to sell new products?

For some people, the term AI seems to evoke distrust, disgust, awe, fear and other strong emotions. Other people ask in wonder: is AI for real? Is it really possible to build machines that are intelligent, machines that explore new frontiers of technology?

There is a fundamental question here: can a machine be intelligent, in principle? We tend to think of intelligence as an essentially human trait. Many animals show signs of intelligence, but as far as we know, only humans show such depth of intelligence.

Pause for a moment, and think: what is your mental image of a thinking robot? (Picture 1: a robot against a blackboard full of equations.) Is it something industrial? (Picture 2: robots on an assembly line.) Is it a cute robot dancing or playing football? (Picture.) Or a very human-like creature? (Picture 3: from the film Her.)

The Association for the Advancement of Artificial Intelligence, an international forum of scientists working in AI, defines AI as: “the scientific *understanding* of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines.” One part of this is about understanding how people think and how they behave intelligently, and studying the mechanisms that make this happen. All this seems to be a fancy way of saying that we want to study the human brain! (Picture 4) The other part is embodiment in machines, and this is the core agenda of AI: understand thought and intelligence, and build machines that behave as if they can think and act intelligently.

Is this about neuroscience and psychology, then? Neurologists work on the brain, and their aim is principally to treat people with a variety of neurological problems. Psychologists study how people think and behave, and try to treat people with a variety of disorders. Is AI then trying to build computer programs that do similar things? Not quite. Some AI scientists do work with the idea of studying the human brain and using that learning to create computer programs that behave similarly. But this is not the predominant style of AI research. Mostly, AI researchers hold the view that the agenda is only to create computer programs which act in a way that humans consider intelligent, and whose internal working may have nothing to do with the human brain. (In other words, AI may not take us any further in scientifically understanding the big mystery of how the human brain works, and many AI researchers couldn't care less!)

This is interesting. We want a creature, made of material that does not necessarily use carbon and hydrogen, not the product of evolution, without the senses that human beings have, and yet it should "behave intelligently"? Why would anyone consider this even possible?

Intriguingly, this idea can be traced all the way back to 1950, when modern computers were only just being built. It was proposed by Alan Turing (Picture 5), a British mathematician, in a scientific paper titled "Computing machinery and intelligence".
Electronic computers barely existed then; the first few machines were just being built. Turing asked, “Can a machine think?” He felt it was too difficult to answer such a question directly. Instead, he proposed that we try to answer a (seemingly) simpler one. He proposed the Imitation Game (Picture 6: from the Hollywood film of the same name): can we tell whether a participant in a conversation is a machine and not a human? We have two rooms, with a computer in one of them and a human being in the other. A human interrogator, who does not know who is in which room, conducts a conversation with both of them, for however long s/he wishes, asking questions as s/he likes. If, at the end of whatever time the interrogator chooses, s/he cannot tell which room the computer is in, then the computer has won the Imitation Game. (Picture 7: schematic.)

Is it just a matter of human-like behaviour, "just" imitating humans, then? Yes, but imitating so well that an intelligent human is fooled: that is the key capability. Turing's idea was that it would then not matter whether the machine was made of carbon or metal; this would be a sufficient and useful definition of intelligent behaviour. We do consider conversation to be an essentially human characteristic. (This is not entirely correct, since scientists have shown that birds, for instance, do carry out long "conversations". But Turing was writing in 1950.)

Turing's test was a proposal of great daring when it appeared. It offered an objective test of machine intelligence, and it anticipated the major objections to AI that came up during the next 50 years. Turing also suggested what remain the major components of AI to this day: knowledge, reasoning, language comprehension and use, and machine learning.

Turing predicted that by 2000, a machine might have about a 30% chance of fooling a human for about 5 minutes. He was dramatically wrong, in two ways. By the mid-1960s, Joseph Weizenbaum had built ELIZA, a computer program that talked like a psychotherapist and could fool many humans for longer than 10 minutes (a flavour of how such a program works is sketched below). But by and large, most people could immediately tell it was a computer program. And by 2000, we did not have any machines with even a 30% chance of fooling a scientist for about 5 minutes.

Move forward to 2022, and we have software like ChatGPT that can write essays and poems, and conduct long conversations with human beings. (Picture 8: a ChatGPT snapshot.) Has AI arrived, then? Do we have machines that can pass the Turing test? Not really. When you work with GPT variants, it quickly becomes clear that the program does not understand what it is saying. While it seems remarkably intelligent in some ways, it also looks remarkably stupid in other ways.

It is clear that we are able to build machines that show some symptoms of intelligent behaviour. Is it then only a matter of the advancement of science and technology before we can build machines that are truly intelligent? The dominant view among AI researchers seems to be "yes", but there is a small group that dissents, holding that there are "complexity" barriers to achieving "general" AI. Mostly, AI researchers are content to build some systems that can learn, some that converse, some that attempt to mimic human vision, some that can play games, some that can do planning, some that can prove mathematical theorems. They do not care for any grand integration into one humanoid robot; they only want to build useful systems. But of course, the question remains.
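To get a feel for how shallow ELIZA's trick was, here is a minimal sketch, in Python, of the same pattern-matching style of conversation. This is only an illustration of the technique, not Weizenbaum's actual program: the patterns and responses below are invented for this example.

```python
import re

# A few illustrative pattern -> response rules, in the spirit of ELIZA.
# The real program had a much larger hand-written script; these rules
# are made up for this sketch.
RULES = [
    (r"i am (.*)",      "Why do you say you are {0}?"),
    (r"i feel (.*)",    "Tell me more about feeling {0}."),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r"my (.*)",        "Your {0}? How does that affect you?"),
]

def respond(user_input: str) -> str:
    text = user_input.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            # Echo fragments of the user's own words back at them.
            return template.format(*match.groups())
    # Fallback: a content-free prompt that keeps the conversation going.
    return "Please go on."

# A short demonstration dialogue.
for line in ["I am worried about my exams.",
             "My mother calls me every day.",
             "It rained all morning."]:
    print("You:  ", line)
    print("ELIZA:", respond(line))
```

Even such a simple bag of tricks, echoing the user's own words back with no understanding at all, can keep a conversation going for a surprisingly long time; that is precisely what made ELIZA's success so striking.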
We have already listed the major components of AI research: machine learning, robotics, vision, natural language understanding, game playing, planning, and theorem proving. These systems, coupled with the tremendous power of the data analysis tools and search engines built during the last two decades, have shown the potential for an entirely new class of systems that seem set to transform the world.