Artificial Intelligence (AI) was defined by John McCarthy, the American computer and cognitive scientist who coined the term, as “the science and engineering of making intelligent machines”. The AI systems developed so far fall into two categories: the symbolic and the sub-symbolic. AI now forms a key component of a wide variety of technologies, from internet search engines to banking software and medical diagnosis. This essay explains the two approaches used in AI and provides examples of each. It also considers the varying interpretations of ‘intelligence’ in order to answer the following question: can AI ever equal or surpass human intelligence? The essay argues that AI can be both more and less than human intelligence. As the Turing Award lecturer Raj Reddy has claimed, there will be properties of human intelligence that an AI system may never exhibit; conversely, there will be capabilities of an AI system that are beyond the reach of human intelligence.
One of the two approaches used in AI is the symbolic, or rule-based, approach. It stores knowledge in the form of verbal rules {IF (A is true) THEN (B is true)}. These rules are usually written by human experts, although some systems display a degree of ‘learning’ and can form complex rules themselves. The machine’s reasoning model then applies the rules to answer questions posed by users, for example: what disease do I have, given my symptoms? One of the largest areas of application for this approach is Expert Systems, through which AI scientists have made significant progress on many day-to-day problems, such as detecting possible fraud or planning tax and audit work.
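The IF-THEN mechanism described above can be sketched in a few lines of code. The following is a minimal illustrative example, not taken from any real expert system: the medical facts, rules and names are invented purely to show how forward chaining over verbal rules works.

```python
# A toy rule-based (symbolic) system. Each rule reads:
# IF all conditions are true THEN the conclusion is true.
# The facts and rules below are hypothetical, for illustration only.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "fatigue"}, "see_doctor"),
]

def infer(facts):
    """Forward-chain: keep applying rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Given a patient's symptoms, the system derives new conclusions.
print(infer({"fever", "cough", "fatigue"}))
```

Real expert systems hold thousands of such rules and add features like certainty factors, but the core idea, deriving new facts from stated rules, is the same.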
The ultimate goal of many AI scientists is to build computer programs that can solve problems and achieve goals as well as human beings do. One can reasonably argue that rule-based, symbolic AI systems are capable of reaching human-level intelligence and potentially surpassing it. Chess has long been regarded as a paradigm of intelligent human behaviour, a game demanding a high degree of ‘logical’ intelligence, and it is an area in which symbolic AI systems have had great success: in 1997, IBM’s Deep Blue chess computer defeated the world champion Garry Kasparov. Another example comes from Kurzweil’s book “The Age of Spiritual Machines”, in which he estimates that the human brain performs 200 million billion calculations per second and projects that by 2025 an “ordinary” personal computer will match this capacity by performing the same number of calculations. If this can be taken as a measure of the “mathematical/logical” intelligence of a human being, then perhaps Artificial Intelligence can one day equal and even outmatch human intelligence.
A major criticism of symbolic systems has been that their rules can be too rigid, failing to capture the peculiarities and unpredictability of human life. Sub-symbolic systems, by contrast, do not encode ‘knowledge’ as a set of verbal rules; their capability emerges from the activation of connections and junctions within a network. The best known sub-symbolic systems are neural networks, which have widespread applications in human life, including face and emotion recognition. For example, Rosalind Picard, a professor at MIT, has designed sensors that allow a computer to gauge how human beings feel by measuring indicators of anger, grief, joy and so on. In finance, neural networks are used to predict the movement of financial markets and to identify spending patterns that indicate potential credit card fraud.
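The contrast with the rule-based approach can be made concrete with a sketch of the smallest sub-symbolic unit: a single artificial neuron. The weights below are hand-picked for illustration only; in a real network they are learned from data rather than written down by an expert, which is exactly what distinguishes this approach from verbal rules.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs
    passed through a sigmoid activation, giving a value in (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two input signals (e.g. pixel intensities). There is no IF-THEN rule
# here: the 'knowledge' lives entirely in the weights and bias.
out = neuron([1.0, 0.0], weights=[2.0, -1.0], bias=-0.5)
print(round(out, 3))
```

A neural network chains thousands of such units together, so its behaviour comes from the pattern of connection strengths rather than from any humanly readable rule.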
AI scientists have made significant headway in modelling human reasoning and in trying to encode ‘common sense’. Yet many claim that AI systems are, in important respects, far less intelligent than human beings, because some human properties may be impossible for AI systems to attain: machines, it is argued, cannot have emotions, and they cannot be creative since they do only what they are programmed to do. The picture, however, is not black and white. AI researchers in the Netherlands have created an “Emotional Cat Robot”. They do not really believe that computers can have emotions, but they observe that emotions serve a function in human reasoning; by endowing intelligent agents with similar emotions, they hope machines can emulate this humanlike reasoning. Similarly, a number of artificial intelligence projects already try to recreate creativity in computers. Harold Cohen spent his whole career developing a program called Aaron, which produces original works of art. Some say this amounts to creativity; Cohen himself claims that Aaron is creative with a small “c” but not creative with a big “C”.
This summarises the main argument of the essay: if people continue to assume that there is a core of human attributes, such as creativity and emotion, that machines will never take on, then AI will never equal human intelligence; it will instead be both more and less than human intelligence. More importantly, the more researchers discover about how the human brain achieves intelligence, the more they will be able to reproduce the same computational architectures and algorithms in computers, and the better they will model human thought. The boundary of what Artificial Intelligence systems can and cannot do will continue to shift with time.
Saturday, 8 May 2010
