The fear of robots taking over the world may seem far-fetched, but with the exponential growth in the field of artificial intelligence (AI), this fearful fantasy is beginning to look more like a reality. In this age of computers, there are many new pioneering technologies dealing with AI. Each of these implementations has its pros and cons, such as being more useful for language comprehension or for mathematical functions. The implementations to be discussed are knowledge bases, neural networks, evolutionary algorithms, genetic programming, and hierarchical temporal memories (Carey). Of the implementations currently in development, hierarchical temporal memories possess the greatest potential and the widest range of possible applications for the knowledge they can gain.
     To further analyze the different types of AI, we first have to establish some definitions. The AI ratio is the ratio of the amount of artificially gained intelligence to the human intelligence that was put into the system to begin with. This ratio is widely used to evaluate an AI system's contribution to society. An AI system with a low ratio will not provide a great increase in knowledge, as an extensive amount was already put into the system to begin with. A system with the highest AI ratio possible would be one where only the question and a terminating function are given to the program, and nothing more. This yields the highest ratio because no extra information is supplied; any less and there would be no question for the program to solve (Koza, Genetic Programming II 37). Also, when examining an implementation of AI, it is important to look at the relevance of the data it produces. These are the two strongest factors considered in identifying the strongest implementation of AI.
     One implementation of AI is the knowledge base. Knowledge bases allow users to put information and equality statements into a database. The knowledge base then accesses this database of information, which it uses to answer a very specific set of questions. There are many prime examples of knowledge bases, including Deep Blue and Cyc.
     Deep Blue was the first computer program ever to beat a chess grandmaster in true tournament competition. Although this is an astonishing feat, the triumph is not truly the outcome of AI. Deep Blue required thousands of lines of code describing nearly every possible move the opponent could make, and how to respond to each of the opponent's moves. For this reason Deep Blue has a very low AI ratio, practically 1:1. It is a very inefficient system, requiring inordinate amounts of time to return nearly nothing of gain to human society.
     Cycorp Inc. is a company that has put a new spin on knowledge bases. Over the past few years it has been developing the Cyc database. Cyc's mission is to provide an AI platform that can fundamentally comprehend language. The programmers of Cyc entered hundreds of dictionary words into the program, along with versatile relationships between these words, such as equality statements. These words and relationships are expressed mainly through predicate calculus. The Cyc application allows users to query different terms in its database; Cyc is supposed to be able to answer questions posed by the user and provide non-syntactical search results that will boost the productivity of the end user. To identify matches to queried terms, Cyc uses the predicate calculus to perform proofs that determine a certain type of equality or inequality between two terms. This new kind of intelligence has provided the first AI implementation with a true comprehension of language (Lenat).
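The kind of proof-based matching described above can be illustrated with a toy sketch. This is a hypothetical example, far simpler than Cyc's actual engine: facts are stored as triples, and a single transitivity rule lets the system derive relationships that were never explicitly asserted, which is roughly how non-syntactical matches become possible.

```python
# A toy forward-chaining knowledge base, loosely in the spirit of systems
# like Cyc (an illustrative sketch, not Cyc's actual inference engine).
# Facts are (subject, relation, object) triples; one rule closes the
# "is_a" relation transitively so queries can follow chains of assertions.

facts = {
    ("dog", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
    ("rex", "instance_of", "dog"),
}

def infer(facts):
    """Repeatedly apply the transitivity rule until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, r1, b) in derived:
            for (c, r2, d) in derived:
                if r1 == "is_a" and r2 == "is_a" and b == c:
                    new.add((a, "is_a", d))
        if not new <= derived:
            derived |= new
            changed = True
    return derived

kb = infer(facts)
print(("dog", "is_a", "animal") in kb)   # True: derived, never asserted
```

A query for "animal" can thus match an entry about dogs even though the word never appears in it, which is the contextual (rather than syntactic) behavior the Cyc examples demonstrate.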
     Cyc has been tested by many research laboratories and has provided many strong examples of the relevance of its data. One query submitted asked for a "happy moment" (Lenat). The results included a picture with a caption telling the reader that it showed a father watching his daughter take her first steps. The result is immediately recognized as relevant to the query, because a human appreciates the satisfaction and happiness of seeing one's daughter take her first steps. To a computer, though, there is no way this result could have been returned solely on the basis of syntax. There are no synonyms or exact word matches in the caption, which shows that the system identified this result contextually. Another prime example is a query for "strong" and "risk taking." The results for this query included a picture titled Rock Climbing, with no caption associated with it. A human being realizes that to rock climb, one must be in good physical shape and very strong, because the climbing is strenuous. Humans also know that not just anyone goes and climbs cliffs; it takes some level of risk taking to venture out rock climbing. These examples are evidence that Cyc has a very strong AI ratio, even though a fair amount of human intelligence was originally injected into the project.
     Knowledge bases have a large potential for what they can do, and they are really the only type of AI that has begun to truly conquer language recognition. They also have a potentially high AI ratio if they become able to fully understand language in the future.
     One drawback of knowledge bases is that they are neither efficient nor very useful for mathematical calculations; they are much more useful for language, so it is important that development continue to focus on their language abilities. Due to this lack of mathematical usefulness, knowledge bases do not have the largest overall potential as the most useful implementation for humans.
     Neural networks are the next type of AI platform; they try to emulate the brain's function using neurons and the pathways between them. Neural networks are relatively new in the field of AI, and the theory behind them relies on mimicking brain function. The network takes one or more inputs into the system, and each neuron performs a calculation on the signal it receives. This calculation has a variable that is tuned, usually by a learning algorithm, to produce a specific type of output for the problem at hand. The process starts by feeding the array of neurons artificial data for which the answer is known, or approximately known. The learning algorithm is then applied to this input data until all the neurons are tuned correctly. This is then verified against truth tables built from other known input data. Once the quality of the tuned variables is assured, the actual data is run through the system and the output is saved or used as intended (DeClaris 1).
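The train-then-verify process above can be sketched with a single artificial neuron. This is a minimal, hypothetical example (not any particular research system): the learning algorithm tunes the weights against a known truth table, and the result is verified before any real data would be trusted to the network.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data with known answers (a "truth table" in the essay's terms):
# learn logical OR from its four input/output pairs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# The neuron's tunable variables: two weights and a bias, started at random.
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)

# The learning algorithm nudges each variable toward the known outputs.
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = target - out
        grad = err * out * (1 - out)   # slope of the sigmoid at this output
        w[0] += 0.5 * grad * x1
        w[1] += 0.5 * grad * x2
        b += 0.5 * grad

# Verify against the truth table before running actual data through.
for (x1, x2), target in data:
    assert round(sigmoid(w[0] * x1 + w[1] * x2 + b)) == target
```

A real network would use many such neurons in layers and separate validation data; the tuning loop, however, is the same idea scaled up.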
     Among the known pros of neural networks is that they run massively parallel calculations, making them very efficient and quick. Also, they are a non-linear implementation of AI, allowing them to perform some calculations that standard types of AI cannot accomplish. A current setback of neural networks is their lack of vision processing. Some projects in progress provide vision capabilities, and there are many small examples available, but none on a scale that would be applicable to humans at this time (DeClaris 1).
     The negatives presented about neural networks are few and remediable. Neural networks have not begun to take on language comprehension as other implementations, such as knowledge bases, have. Language comprehension is one of the most difficult tasks currently posed to AI. Even though the current state of neural networks does not allow for language comprehension, this skill can be gained over time. Although there have been no examples, researchers believe it can be done, since neural networks are modeled after the brain, and the brain is able to understand language (DeClaris 2).
     Neural networks have a very high AI ratio because they require virtually no input information other than a truth table and the data the user wants calculated. Since neural networks are extremely efficient at math, and they have the possibility of expanding toward language comprehension, they show promise in the field of AI. This, along with their ability to tackle nearly any problem currently thought available to human calculation, gives this implementation a very high potential for growth. It would also produce extremely relevant results for humans, allowing us to perform nearly any calculation the brain is able to comprehend. For these reasons, neural networks seem to have the greatest need for continued development.
     Evolutionary algorithms are a type of AI based on the Darwinian principle of natural selection. Technically speaking, an evolutionary algorithm is a mathematical program that alters the equation it uses to perform calculations based on feedback from a fitness function (Koza, Genetic Programming III 22). In simplified terms, the user specifies a problem set, telling the program how many variables it is trying to calculate and what types of variables they are. The programmer then defines a fitness function that the program uses to evaluate each solution it comes up with. The fitness function returns a number, and the program uses this number to judge whether a particular solution is close to, or far from, an acceptable answer. The program generates an initial population of random answers, with fitnesses attached to them. Then, just as in natural selection, more fit answers are more likely to be selected to move on to the next generation through operations such as asexual reproduction, crossover reproduction, mutation, and two other more complicated types of reproduction. The fitnesses of the new generation are then computed. Over time, the program tends to converge on a certain answer, which is the most fit answer it can find (Koza, Genetic Programming 12).
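The cycle just described, random population, fitness evaluation, selection, reproduction, can be sketched on a deliberately tiny problem. This is a hypothetical toy, not one of Koza's actual runs, and it uses only averaging crossover and Gaussian mutation out of the operations mentioned above.

```python
import random

random.seed(1)

# Toy evolutionary algorithm: find the x that minimizes (x - 3)^2.
# Lower fitness means the answer is closer to acceptable.
def fitness(x):
    return (x - 3.0) ** 2

# Initial population of random candidate answers.
pop = [random.uniform(-10, 10) for _ in range(50)]

for generation in range(100):
    # Selection: the fitter half of the answers survive to reproduce.
    pop.sort(key=fitness)
    parents = pop[:25]
    children = []
    while len(children) < 25:
        a, b = random.sample(parents, 2)
        child = (a + b) / 2.0                  # crossover reproduction
        if random.random() < 0.2:
            child += random.gauss(0, 0.5)      # mutation
        children.append(child)
    pop = parents + children                   # survivors plus offspring

best = min(pop, key=fitness)
print(best)  # converges near 3.0
```

The fitness function is the only problem-specific information supplied, which is why the paragraph below can claim such a high AI ratio for the technique.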
     Evolutionary algorithms are very useful for mathematical calculations with numerous unknown variables. Although this implementation excels at finding an answer from the smallest amount of data possible, it is very resource intensive. Some problems have been known to take a full week to evolve an acceptable answer. Even so, evolutionary algorithms have an extremely high AI ratio. This implementation, as stated, requires no more information than necessary; any less and no problem would even be defined.
     Evolutionary algorithms, in conjunction with genetic programming, have shown true merit in the form of human-competitive results. Together they have produced twenty-one results that mimic the functionality of previously patented ideas, and they have created two new patentable inventions (Koza, Genetic Programming IV 7).
     This implementation has definite potential in the realm of mathematical calculations, a realm no other implementation has yet shown any ability to tackle. Although evolutionary algorithms have some capabilities other types of AI have yet to display, they have not shown the possibility of language comprehension or visual recognition, which are currently the two most heated topics in AI. For these reasons, evolutionary algorithms are not the most important implementation to concentrate on out of all the types of AI discussed.
     Genetic programming is fundamentally equivalent to evolutionary algorithms. The way the two work is nearly identical, except that genetic programs, rather than just generating a specific answer to one problem, generate an actual program that can be run continually to answer the same or varied problems over and over again (Koza, Genetic Programming IV: Video).
     Because it is so similar to evolutionary algorithms, genetic programming has an identical set of pros and cons, except that it can create a program that remains reusable as different constants are substituted into the original problem, whereas evolutionary algorithms generate only the one answer for a specific set of constants. Some might think that because genetic programs can do more, evolutionary algorithms should no longer be developed, but this is not so. Evolutionary algorithms are more efficient at calculating a specific answer, and if only a specific answer is needed, there is no reason to develop a genetic program to do the calculations (Koza, Genetic Programming IV 25). Also, genetic programming is much more difficult in general, and more information needs to be supplied to the program. For these reasons, as well as the other positives stated about evolutionary algorithms, this implementation should also continue to be developed, but in the end it will not yield all the answers humans seek.
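The distinction between the two approaches can be illustrated with a deliberately simplified sketch. This hypothetical example is mutation-only (it omits crossover and the other reproduction operators), and the target problem and primitives are invented for illustration: the evolved individual is an expression tree, that is, a reusable program, rather than a single numeric answer.

```python
import random

random.seed(2)

# Minimal genetic-programming sketch: evolve an arithmetic expression tree
# in the variable x that reproduces the target behavior f(x) = x*x + x.
# Trees are nested tuples like ("add", ("mul", "x", "x"), "x").

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(0, 2)
    return (random.choice(["add", "mul"]),
            random_tree(depth - 1), random_tree(depth - 1))

def run(tree, x):
    """Execute the evolved program; this is what makes GP reusable."""
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    a, b = run(left, x), run(right, x)
    return a + b if op == "add" else a * b

samples = range(-5, 6)

def fitness(tree):
    # Squared error against the target on sample points; 0 is a perfect fit.
    return sum((run(tree, x) - (x * x + x)) ** 2 for x in samples)

def mutate(tree):
    # Replace the whole tree, or one branch, with a fresh random subtree.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree()
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

pop = [random_tree() for _ in range(200)]
for _ in range(200):
    pop.sort(key=fitness)
    pop = pop[:100] + [mutate(random.choice(pop[:100])) for _ in range(100)]

best = min(pop, key=fitness)
# Unlike an evolutionary algorithm's single answer, `best` is a program
# that can be re-run on new inputs: run(best, 10), run(best, 11), ...
```

Substituting new constants or inputs into `best` costs nothing further, which is exactly the reusability advantage claimed for genetic programming above.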
     Hierarchical temporal memories (HTMs) are easily the most complex implementation of AI. They represent a special case of neural network that uses linear algebra to represent any data passing through the system, allowing all calculations to work (George and Hawkins 5-6). HTMs have already performed complex visual recognition that traditional forms of AI have had extreme difficulty accomplishing. HTMs can accomplish what neural networks can, but more efficiently, and they can take on tasks such as language much more easily. Neural networks would require random tuning of neurons while data about language is passed through the system, whereas with HTMs, language need only be digitized, as in the case of knowledge bases; all calculations could then be performed through the HTM. Language recognition and comprehension have not yet occurred in HTMs because language has to be digitized differently than in knowledge bases: it must be represented in matrices instead of predicate calculus. A proven benefit of HTMs is their ability to do complex visual recognition. Programs are already available for simple two-dimensional object recognition, such as ladders and steps: objects with recognizable shapes. HTMs can accomplish all that neural networks can with fewer resources, and it seems they will be able to accomplish language comprehension more quickly and simply. HTMs appear to be the AI implementation with the most potential and usefulness to human society.
     As of yet, no implementation has definitively established itself as the center of AI. Each implementation currently has problem sets for which it is better suited than any of the other types. Knowledge bases have become the testing ground for language recognition. Neural networks are making small progress but are spreading to all sectors of AI. Evolutionary algorithms are great for mathematical computations with many variables and no initial information, and genetic programming is good for recurring problems with different coefficients and little other initial information. HTMs are also spreading to all sectors of AI, but are able to make bigger strides than neural networks seem to be. HTMs have put themselves on top, lining themselves up to conquer all the different types of problems humans may have, and to do so with seemingly the most computational power. HTMs are the AI implementation with the most potentially useful functions for human society in the future.