When the British mathematician Alan Turing designed the Turing test in 1950 as a means of measuring whether a computer was capable of demonstrating intelligent behavior equivalent to, or indistinguishable from, that of a human, he created what was arguably the first benchmark for the performance of Artificial Intelligence systems.
The field has advanced dramatically since then, to the point where AI-powered machines and software increasingly operate with little or no human supervision. Let’s take a look at how AI systems have evolved in the years since Turing devised that test.
After the Industrial Revolution, the next big transformational wave for companies was AI, which enabled the automation of simple processes and programs. These “first-wave” programs were designed to find efficient solutions to well-defined real-world problems: programmers would take their insight into a single problem and turn it into code. This approach gave us, among other things, delivery-optimization software.
Take, for example, the Microsoft Operating System, Google Maps, smartphone apps and their continuous updates, even traffic lights that let people cross the street at the press of a button. First Wave AI systems are usually based on clear, logical rules. However, since the parameters for each type of situation are identified in advance by human experts, these systems find it difficult to tackle new situations and have a hard time dealing with uncertainty.
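The first-wave approach can be sketched in a few lines. This is a minimal, illustrative example, not code from any real product; the function name, thresholds and delivery methods are all assumptions made up for the sketch. The point is that every rule is written by hand in advance, so the program only handles situations its authors anticipated.

```python
# "First-wave" AI in miniature: hand-coded rules chosen by human experts.
# Any input that falls outside these anticipated cases is handled poorly
# or not at all. All names and thresholds here are illustrative.

def route_delivery(distance_km: float, is_fragile: bool) -> str:
    """Pick a delivery method from fixed, hand-written rules."""
    if is_fragile:
        return "courier"       # experts decided fragile goods need a courier
    if distance_km <= 5:
        return "bicycle"
    if distance_km <= 50:
        return "van"
    return "truck"

print(route_delivery(3, False))    # bicycle
print(route_delivery(120, True))   # courier: the fragile rule fires first
```

Note that nothing here is learned: if delivery conditions change, a programmer has to rewrite the rules, which is exactly the brittleness described above.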
This led to the development of the Second Wave of AI systems around 2010, in which engineers built statistical models for certain types of problems instead of writing precise rules for the systems to follow. They would then ‘train’ these models, typically with deep neural networks, to make them more accurate and efficient.
Unlike First Wave AI systems, these could learn and adapt to different situations if properly trained with machine-learning algorithms, and they are highly successful at perceiving the world around them. Face recognition, speech transcription and identifying animals and objects in pictures are real-world applications of Second Wave AI systems.
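The contrast with the first wave can be made concrete with a tiny trained model. The sketch below uses a single perceptron (a one-neuron ancestor of the deep networks mentioned above) learning a toy task from labelled examples; the data, function names and hyperparameters are invented for illustration.

```python
# "Second-wave" AI in miniature: instead of hand-coding rules, we fit a
# statistical model from labelled examples. Here a single perceptron
# learns the logical-AND pattern; real systems use far larger networks.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that step(w.x + b) matches the labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                        # 0 when the guess is right
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err                         # nudge toward the label
    return w, b

# Toy training data: output is 1 only when both inputs are 1 (logical AND).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print([predict(x) for x in X])  # [0, 0, 1] pattern learned: prints [0, 0, 0, 1]
```

The behavior was never written down as a rule; it was extracted from the examples. This is also why such systems inherit whatever is in their training data, which sets up the problem described next.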
However, there was no way to know how such a system used its input to arrive at an output, or which data drove a given decision. Remember how, in 2016, Microsoft had to yank its new Twitter bot ‘Tay’ less than 24 hours after its debut, after it began spreading Nazi propaganda?
This brings us to the Third Wave of AI systems. Despite all the progress made so far, AI still has a long way to go in terms of developing human-level thinking, learning and problem-solving ability—a state called Artificial General Intelligence, or AGI.
In the Third Wave, AI systems will themselves construct models that explain how the world works. They will be able to draw on several different statistical models, train themselves toward a more complete understanding of the world, and potentially develop abstract thinking.
However, many researchers agree that the vast majority of AI systems are nowhere near human general cognitive ability, and that applying a cognitive architecture alone is not enough to achieve real intelligence. To be truly effective, these cognitive components need to be deeply integrated and tightly coupled with short-term memory, context and reasoning.
The First and Second Waves gave us a glimpse of what AI can do for us; we’re now looking to the Third Wave to fully realize AI’s true potential. This will require, above all, the ability of an AI system to learn autonomously in real time, to generalize, to reason abstractly and to use natural language.
“In the first wave of AI you had to be a programmer. In the second wave of AI you had to be a data scientist. In the third wave of AI — the more moral you are the better.”
Digital magazine for startups and small businesses. Bizztor creates & curates content for entrepreneurs looking to start and grow a business.