A long time ago, we started dreaming about autonomous machines that could simplify our work. Evidence of this dream can be traced back hundreds of years in the histories of countries like Greece and Egypt. In 1920, Karel Čapek's science-fiction play featured a factory called Rossum's Universal Robots that made artificial people; the term 'robot' was first coined in this play. After that, autonomous machines began to appear in comics and movies.

Only in the 1940s did people start to think seriously about artificially creating a human-like intelligence. In 1943, Warren McCulloch, a neurophysiologist, and his friend Walter Pitts, a logician, wrote a paper on how the neurons in our brain might work, and this became the starting point for all the neural-network technology available today. Around this time, serious discussions arose about where to draw the line in order to agree that a machine exhibits intelligence. In 1950, the computer scientist Alan Turing published a paper called "Computing Machinery and Intelligence", in which he proposed a test to decide whether a machine is intelligent. Turing called it the "imitation game"; today we know it as the Turing test. In this test, a human judge evaluates a conversation between a human and a machine. The conversation is limited to text only, through a screen and keyboard. The judge knows that one of the two participants is a machine. If the judge cannot tell the machine from the human, the machine is said to have passed the test, and we can conclude that it is intelligent.

The term "artificial intelligence" was first coined at a conference held in 1956 at Dartmouth College. John McCarthy, Marvin Minsky, and others who later became popular figures in AI attended this conference. There, Allen Newell and Herbert Simon presented a small AI program called the Logic Theorist, which wrote proofs for mathematical theorems. It could not solve complex mathematical problems or do anything beyond writing simple proofs, yet at the time it was seen as a major breakthrough in AI research. After the Logic Theorist became popular, many people started to develop similar applications.

In 1958, the LISP programming language was introduced; its name comes from "LISt Processor". LISP soon became a popular programming language in AI research. Early AI programs were told explicitly, through symbolic logic, to operate within certain boundaries, and LISP let researchers express this logic quite easily: the source code of a LISP program is itself a list, and lists can contain other lists.

Later, many people came up with their own ideas of intelligent machines. One interesting project was ELIZA, an AI program introduced in 1966 by Joseph Weizenbaum that behaved like a psychotherapist. In reality, these programs were not intelligent; they were simply programmed to behave like an intelligent system.

In 1970, Marvin Minsky and Seymour Papert proposed the idea of micro-worlds, in which an intelligent program solves simple problems inside a tiny, well-defined world. The knowledge for such a program has to come from an expert in that particular field. For example: What is the color of this Rubik's Cube? What is the shape of this object? Will a sharp-edged object rest stably on a flat surface? Many such micro-level problems could be handled "intelligently" by an AI program; a small sketch of this style of program follows below. But back then, computing power was very costly, and many AI research programs were funded mainly by government agencies.
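To make the micro-world idea concrete, here is a minimal sketch in Python. It is purely illustrative: the knowledge table BLOCKS and the functions answer_query and rests_on_flat_surface are hypothetical names, not code from any historical AI system. What it demonstrates is that every fact and every rule has to be hand-coded by a human expert.

    # A toy "micro-world": every fact below is hand-coded by a human expert.
    # (Hypothetical sketch; not from any historical AI system.)
    BLOCKS = {
        "cube1": {"color": "red", "shape": "cube"},
        "ball1": {"color": "blue", "shape": "sphere"},
    }

    def answer_query(obj, attribute):
        # The program "knows" only what is in the table;
        # there is no common sense to fall back on.
        return BLOCKS.get(obj, {}).get(attribute, "unknown")

    def rests_on_flat_surface(obj):
        # A hand-written rule: flat-faced objects rest stably, spheres roll away.
        return BLOCKS.get(obj, {}).get("shape") == "cube"

    print(answer_query("cube1", "color"))    # -> red
    print(answer_query("ball1", "shape"))    # -> sphere
    print(rests_on_flat_surface("ball1"))    # -> False
    print(answer_query("cube1", "weight"))   # -> unknown (never coded in)

Everything the program "knows" lives in that one small table; ask about anything that was never written down, and it has no way to work out an answer. That limitation is exactly what comes next.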
But the capabilities of those AI programs were extremely limited, and they were seen as little more than toys. Everywhere, AI researchers struggled with the limited computing power of the day. In 1974, many agencies stopped funding AI research because it had not delivered what was promised, and AI research entered its first winter, which lasted until around 1980.

In the 1980s, AI made a comeback. During this time, people tried to mimic the human brain with the help of artificial neural networks, which came to be seen as the key to unlocking the true potential of AI. This idea brought the funding back, and AI research continued. One notable effort was Japan's Fifth Generation Computer project: in 1981, Japan sanctioned $850 million to build AI programs that could translate languages, analyze and report on pictures, and reason like a human being. Although such programs could perform their given tasks efficiently, they all lacked so-called common-sense knowledge. It is extremely difficult for a computer to differentiate between what is relevant and what is irrelevant. We do many things subconsciously by applying knowledge gathered through common sense: when you walk into your home, you don't consciously think that you should not walk on the chairs and tables. You know these things because you have common sense, but it is extremely difficult for a computer to know them.

To counteract this common-sense problem, researchers took different directions. In 1984, one interesting research team, led by Douglas Lenat, decided to solve the problem by facing it head on, and the Cyc project was started. For years, the team has been building a large computational database by hand-coding everything that makes up common sense: that you should not walk on a table or chair, why you should not do so, and so on. The Cyc team believes this will give computers access to the knowledge that constitutes common sense. In fact, people on this project continually feed the computers with everyday news from newspapers, in the hope that some day the machines will have enough knowledge to begin learning by themselves.

By 1987, the Japanese Fifth Generation Computer project was turning out to be a major failure. Researchers realized that building anything useful would require far more computing power and memory, with far less processing time, than they had. And with the introduction of personal computers, many people came to see PCs as more powerful than a costly Lisp machine. So once again, in 1987, AI research entered another winter.

From 1993 onward, AI programs started to appear again thanks to the availability of more computing power. In 1997, IBM's Deep Blue became the first computer to beat a reigning world chess champion, Garry Kasparov. People had been talking about AI for decades, but between 1993 and 2010, AI research happened mostly in research labs, among a limited number of people; hardly anyone else cared about AI. Then, after 2010, it suddenly started to rise on a much bigger scale. Why did AI become popular again? There are mainly two reasons for this noise. One is the availability of cheap data.
The other is the availability of high-powered graphics cards. Now you might have many questions. Does the Turing test have any significance today? What is the connection between graphics cards and artificial intelligence? AI research has been through several winter periods; is another AI winter possible in the future? Well, we will try to unwind these mysteries in the upcoming videos. Click here to watch modern-day AI in part-3 of this video series. Thanks for watching this video. See you again!