Digging Out of the AI Winter


In the mid-twentieth century, when researchers first fathomed the idea of artificial intelligence, it was nothing but a vision. Yet it was a vision that seemed more than possible given the recent leaps in technology. Developments such as transistors, microprocessors, and high-level programming languages had put the technology world into high gear. But as countless engineers and scientists went to work developing this coveted artificial intelligence, they unexpectedly ran into a brick wall at full speed. The optimistic, glorious views of AI began to wane into what we now call the AI Winter. With recent developments in computing technology, however, has the vision of AI been resurrected?


The reason so many people had such high expectations for artificial intelligence was most likely the apparent explosion in technology. When the first computers appeared, nothing in the world could compare to them; before the computer, the closest technology was the mechanical calculating machine. The introduction of the computer changed everyone's lives forever. Everyone marveled at its remarkable speed, efficiency, and accuracy, and as engineers programmed more and more functions into it, the computer consistently broke old records and set new ones. Each of its accomplishments was unprecedented. The people were by no means naïve; they were simply on a technological adrenaline high. Engineers were constantly adding new functions and new power to these machines, and the initial exponential growth of computing technology overshot many expectations. For these reasons, it was natural for people of the time to believe that the growth of the computer would never stop, and to envision what was to come.


Most engineers of the time believed that if they followed the same developmental path they had been taking, they would be able to produce the desired artificial intelligence: if it had worked for developing the computer itself, it should work for developing AI. This proved to be the wrong path to the creation of AI. It slowly dawned on programmers that artificial intelligence is far more difficult than programming additional functions into a computer; cramming more transistors onto a microprocessor was not the right approach. As this realization set in, many turned their backs on the whole enterprise and scrapped their projects. AI went through a long period of disheartenment and suspension. However, such an idea could not stay dormant forever.


Through lengthy research and experimentation, engineers have since been able to formulate new methods for developing AI. These methods attempt to pick up where the old ones failed, aiming primarily at breaking the boundary between the human mind and the computer. During the development of AI, researchers realized that for AI to truly advance, one must first understand the human mind, because it is in the mind that all the characteristics that make a human distinct and unique reside. This insight into the direction AI should take led to techniques such as neural networks, genetic algorithms, and other learning algorithms.
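To make the idea of a "learning algorithm" concrete, here is a minimal sketch of my own (an illustrative example, not a specific system from the history above): a perceptron, one of the earliest neural-network models, learning the logical AND function from labeled examples rather than having the function explicitly programmed in. The training data, learning rate, and epoch count are all assumptions chosen for the demonstration.

```python
# A minimal perceptron: it learns the logical AND function from examples
# instead of being programmed with it. Illustrative values throughout.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights from labeled examples (supervised learning)."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Predict: fire (1) if the weighted sum crosses the threshold
            output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - output
            # Update rule: nudge weights in the direction that reduces error
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Labeled "experience": inputs paired with the desired output (logical AND)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
```

The point of the sketch is the update rule: nothing in the code states what AND means; the weights are conditioned by repeated exposure to examples, which is precisely the shift from programming functions in to letting the machine learn them.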


All of these new methods focus on one main aspect: mimicking human biological intelligence. One of the most important insights into AI is the understanding of the learning aspect of intelligence. I believe this was the single most significant step forward on the road to AI. The idea is based on a key fact of life: humans have been evolving for millennia, which means our minds have been learning and constantly adapting to changes in our lives. This matters because the mind, and intelligence, must be conditioned and trained over time. The same principle must be applied to computers and to any artificially intelligent beings we ever create. You cannot simply program a lifetime's knowledge into a computer program and expect it to react and perform exactly as a human would. A human takes past experiences and applies them to their knowledge in order to make the best decision in the present. Learning algorithms are most definitely a drastic change in the