10 Milestones in the History of AI

by Johan Tobias

The development of modern artificial intelligence systems has been a long time in the making, dating back to the earliest days of automated machines and mechanical calculation. Here are 10 of the most critical moments in the advancement of intelligent machines, and in giving those machines the ability to think…

10. Charles Babbage’s Difference Engine – 1822

Charles Babbage’s Difference Engine – a computational device designed in the 1820s – could be called the earliest ancestor of the thinking machine. The invention sought to automate the tedious work of calculating mathematical tables, using metal wheels and levers to store numbers and carry out calculations that were remarkably complex for the time.

While Babbage built partial prototypes of the Difference Engine, the full machine was never constructed in his lifetime. His work later led to the Analytical Engine – a more advanced, steam-powered theoretical machine that, if constructed, would have been the first operational general-purpose computer. Babbage believed it could perform any calculation from a list of instructions – a concept shared by modern computers.

Charles Babbage’s Difference Engine was a giant leap towards artificial intelligence, even if his ideas remained unrealized during his lifetime. Because of its importance, the invention is sometimes referred to as the beginning of the age of artificial intelligence.

9. Leonardo Torres Quevedo’s Chess Automaton – 1914

Leonardo Torres Quevedo was a Spanish engineer credited with creating El Ajedrecista, an automated chess player – perhaps the first machine in history that could make decisions and apply them to a practical situation. Torres Quevedo’s machine played a chess endgame known as KRK, in which its king and rook faced a human opponent’s lone king. The machine used conditional rules to choose its moves on its own, and its design allowed it to detect and respond to illegal moves.

Torres Quevedo belonged to an early group of scientists and thinkers who wanted to build machines that could truly think for themselves, with the capability of choosing from a complex set of possibilities. His 1914 work, Essays on Automatics, laid out ideas for machines capable of performing arithmetic using switching circuits and sensors, raising the possibility of automata capable of something like judgment.

Despite his pioneering contributions to artificial intelligence, Torres Quevedo’s work in this area was long overshadowed by his numerous achievements in other engineering fields, like funiculars, aeronautics, and remote control devices, and it remained largely overlooked outside Spain.


8. The Turing Test – 1950

Proposed by Alan Turing in 1950, the Turing Test remains a benchmark measure of a machine’s ability to exhibit human-like intelligence even today. At its most basic, the test involves a human judge holding a text-based conversation with both a human and a machine, attempting to tell one from the other. If the judge can’t reliably do so, the machine is considered to have passed the Turing Test.
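
For readers who like to see the idea spelled out, here is a toy sketch of the set-up in Python. It is purely illustrative: the `judge_questions`, `ask_human`, `ask_machine`, and `judge_guess` pieces are hypothetical stand-ins, not part of any real benchmark.

```python
import random

# A toy sketch of the imitation game described above, for illustration only.
# `ask_human` and `ask_machine` are hypothetical stand-ins for a real person
# and a real program answering over a text channel.

def turing_test(judge_questions, ask_human, ask_machine, judge_guess):
    # Randomly hide which respondent is which behind the labels "A" and "B".
    if random.random() < 0.5:
        respondents = {"A": ask_human, "B": ask_machine}
        machine_label = "B"
    else:
        respondents = {"A": ask_machine, "B": ask_human}
        machine_label = "A"

    # The judge sees only labelled transcripts, never the respondents themselves.
    transcripts = {label: [(q, respond(q)) for q in judge_questions]
                   for label, respond in respondents.items()}

    guess = judge_guess(transcripts)   # the judge names the label they think is the machine
    return guess != machine_label      # True: the judge was fooled, so the machine "passed"
```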

Since its inception, the test has been a standard for evaluating the progress of AI systems around the world, inspiring a number of studies and experiments aimed at building machines capable of passing it. While it has its limitations, especially for the advanced AI systems of today, the Turing Test remains an important reference point in the field of artificial intelligence, and a starting point for the development of more rigorous tests of machine intelligence in the future.

7. First Industrial Robot – 1954

Unimate, the world’s first industrial robot, traces its origins to 1954, when inventor George Devol filed the patent for the programmable machine it was built around; the finished robot went to work in 1961. It was a revolutionary step not just for manufacturing, but also for automated systems in general, as this was the first time an automated machine was deployed in an industrial capacity.

Unimate was first used at a General Motors die-casting plant in Trenton, New Jersey, and its efficiency led to orders for about 450 more Unimate robotic arms across the die-casting industry. Soon, the robot was in industrial use around the world, at companies like BMW, Volvo, Mercedes-Benz, British Leyland, and Fiat.

6. The Dartmouth Conference – 1956

The term ‘artificial intelligence’ was coined at the Dartmouth conference, a 1956 gathering of leading researchers in the field at Dartmouth College. Organized and led by John McCarthy, it was meant to explore the potential of thinking machines beyond simple behaviors. While the attendees didn’t know it at the time, the conference clarified and developed the ideas that would eventually lead to modern artificial intelligence, all based on a revolutionary premise: that computers could simulate the kind of intelligence found in human beings.

The discussions that took place at the Dartmouth conference set the foundation for AI as an interdisciplinary field. They also had a lasting impact on fields beyond AI, including engineering, mathematics, computer science, and psychology.


5. First Artificial Neural Network – 1958

In 1958, Frank Rosenblatt, a research psychologist and project engineer at the Cornell Aeronautical Laboratory, demonstrated the Perceptron – billed at the time as the first machine capable of generating ideas on its own. It was a single-layer neural network inspired by the brain’s neurons and their interconnected communication. It made binary classifications, firing an output of 1 if the weighted sum of its inputs exceeded a certain threshold, and 0 otherwise. Though limited to linearly separable classification problems, the Perceptron’s simple learning algorithm marked the first generation of neural networks in history.
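
The basic idea is simple enough to sketch in a few lines of modern Python. This is only an illustration of the threshold rule described above, not Rosenblatt’s original hardware or software, and the weights, bias, and inputs are made-up numbers:

```python
# A minimal sketch of a single-layer perceptron making a binary decision.
# The weights, bias, and inputs are made-up numbers for illustration only.

def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs, the rough analogue of a neuron "adding up" its signals.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Fire (output 1) only if the sum crosses the threshold, otherwise stay silent (output 0).
    return 1 if total > 0 else 0

# Example: a toy classifier that "fires" when the second input dominates the first.
print(perceptron([0.2, 0.9], weights=[-1.0, 1.5], bias=-0.1))  # -> 1
print(perceptron([0.8, 0.1], weights=[-1.0, 1.5], bias=-0.1))  # -> 0
```

Rosenblatt’s training procedure nudged the weights up or down whenever the machine’s answer was wrong, which is where the ‘learning’ in the story comes from.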

Despite popular interest from the computing community and the media, the Perceptron failed to live up to its early hype, and the backlash against its limitations indirectly led to the phenomenon we now call the ‘AI winter’, when funding for AI research was curtailed for a number of years. Today, however, Rosenblatt’s invention is widely recognized as an important step in the evolution of artificial intelligence, as deep learning and artificial neural networks are integral parts of modern AI systems.

4. John McCarthy’s Development Of LISP – 1960

LISP, short for ‘list processing’, is a computer-programming language developed by John McCarthy in the late 1950s and first published in 1960. Its foundation lies in the theory of recursive functions, where a function can appear in its own definition. Unlike traditional procedural languages like FORTRAN, LISP treats a program as a function applied to data. This unique approach allows LISP programs to operate on other programs as data, making it an ideal language for AI programming.
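
Both ideas are easier to see in code. The sketch below is written in Python rather than LISP itself, purely to illustrate the two concepts in question: a function that appears in its own definition, and a program written as ordinary data (a nested list) that another program can read and evaluate.

```python
# Illustration only, in Python rather than LISP: two ideas the language is built on.

# 1. A recursive function: the definition of factorial refers to factorial itself.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

# 2. "Programs as data": a tiny LISP-like expression stored as a nested list,
#    which another function can walk and evaluate like any other data structure.
def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    values = [evaluate(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

program = ["+", 1, ["*", 2, 3]]   # the LISP expression (+ 1 (* 2 3)), written as data
print(factorial(5))               # -> 120
print(evaluate(program))          # -> 7
```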

While the language didn’t gain much traction in other areas, LISP became the language of choice for artificial intelligence research for decades, thanks in large part to its strong connection with AI work at MIT and its support for self-modifying programs. AI programmers long relied on it for research and development, particularly in natural-language processing, theorem proving, and problem-solving.

3. First Chatbot – 1966

The first chatbot was released to the public in the mid-1960s, developed by Joseph Weizenbaum, a German-American computer scientist at MIT’s artificial intelligence lab. Running on the early IBM 7094 mainframe, ELIZA was designed to engage in natural conversations with users by scanning their messages for keywords and using them to form responses. Interestingly, many of its users ended up forming bonds with the chatbot, even after being told that the program lacked true human comprehension – a phenomenon now known as the ‘ELIZA effect’.
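
The trick is surprisingly simple. The following is a heavily pared-down sketch of ELIZA-style keyword matching written in modern Python, not Weizenbaum’s original code, with a tiny made-up rule list:

```python
# A heavily simplified sketch of ELIZA-style keyword matching, for illustration only.
# Weizenbaum's original used scripts of ranked keywords and reassembly rules; this toy
# version just scans for a keyword and plugs the rest of the sentence into a canned
# response, which is enough to show the trick.

RULES = [
    ("i feel", "Why do you feel{rest}?"),
    ("my mother", "Tell me more about your family."),
    ("because", "Is that the real reason?"),
]

def reply(user_input):
    text = user_input.lower().rstrip(".!?")
    for keyword, template in RULES:
        if keyword in text:
            rest = text.split(keyword, 1)[1]
            return template.format(rest=rest)
    return "Please go on."   # default response when no keyword matches

print(reply("I feel anxious about work."))   # -> Why do you feel anxious about work?
print(reply("It rained all day."))           # -> Please go on.
```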


ELIZA’s development was a major milestone in the evolution of artificial intelligence, paving the way for the study of human-computer interaction and chatbot technology. Weizenbaum’s early experiments with ELIZA challenged the conventional idea that true intelligence was necessary for convincing communication, showing instead that a simulation of intelligence could be enough to deceive people and create a sense of connection with a machine.

2. IBM’s Deep Blue Defeats World Chess Champion – 1997

Deep Blue was a revolutionary IBM computer specially designed to play chess. In 1997, it achieved something no machine had ever done before. In a six-game match against the reigning world champion, Garry Kasparov, Deep Blue secured two wins and three draws against a single defeat, marking the first time a computer beat a reigning world champion in a match played under standard tournament conditions.

Deep Blue’s victory was a groundbreaking moment in artificial intelligence and computer science in general, showcasing the sheer calculating power possessed by machines at the time – as the computer was capable of analyzing up to 200 million chess positions per second – and their potential for complex tasks beyond chess. The match attracted media coverage from around the world, paving the way for all the other man-vs-machine matches we’ve seen since then.
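
Under the hood, chess computers of Deep Blue’s era were built on game-tree search: look ahead through possible moves and counter-moves, score the resulting positions, and pick the move that leads to the best guaranteed outcome. The sketch below shows the bare minimax idea in Python; it is not Deep Blue’s actual algorithm, which added alpha-beta pruning, a sophisticated evaluation function, and custom chess hardware, and the `legal_moves`, `apply_move`, and `evaluate` functions are hypothetical placeholders a real engine would have to supply.

```python
# Not Deep Blue's code: just a bare-bones illustration of the game-tree search idea
# that chess engines are built on.

def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Score a position by looking `depth` moves ahead.

    `legal_moves`, `apply_move`, and `evaluate` are placeholder functions the
    caller must supply for a real game; they are not defined here.
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)

    if maximizing:
        # Our turn: assume we pick the move with the best score for us.
        return max(minimax(apply_move(position, m), depth - 1, False,
                           legal_moves, apply_move, evaluate) for m in moves)
    else:
        # Opponent's turn: assume they pick the move that is worst for us.
        return min(minimax(apply_move(position, m), depth - 1, True,
                           legal_moves, apply_move, evaluate) for m in moves)
```

Searching this way is what makes the number of examined positions explode into the hundreds of millions; Deep Blue’s feat was doing it fast enough, and judging the resulting positions well enough, to beat the best human player alive.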

1. ‘Stanley’ Robot Wins DARPA Challenge – 2005

The 2005 edition of the DARPA Grand Challenge – a driverless-vehicle competition run by the US Defense Advanced Research Projects Agency – made history when it was won by Stanford University’s autonomous robot, Stanley. The challenge required teams to build mobile ground robots able to navigate 132 miles of harsh desert terrain in under ten hours. The Stanford team, made up of 65 students, professors, engineers, and programmers, converted a 2005 Volkswagen Touareg into a sophisticated autonomous vehicle capable of driving itself. Its success relied heavily on advanced artificial intelligence and a software pipeline that converted sensor data into vehicle controls, enabling Stanley to understand and navigate the course.

The vehicle was equipped with state-of-the-art hardware, including roof-mounted light detection and ranging (LIDAR) units that bounced lasers off the ground to create a 3D map of the terrain, guiding the robot’s path. Complex algorithms allowed Stanley to fuse data from its different sensors and make informed decisions, and to learn from previous mistakes to continuously improve its performance.
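
For a sense of what a pipeline like this means in practice, here is a deliberately toy sketch in Python of the general sense-perceive-plan-act pattern. It is not the Stanford Racing Team’s code, and every function, threshold, and number in it is invented for illustration.

```python
# A schematic sketch of a sense -> perceive -> plan -> act pipeline, for illustration
# only. Stanley's real software had many more stages, probabilistic terrain analysis,
# and hard real-time constraints; every number and function here is made up.

from dataclasses import dataclass

@dataclass
class Controls:
    steering_angle: float   # toy units
    throttle: float         # 0.0 to 1.0

def build_terrain_map(lidar_scan, expected=5.0, tolerance=1.0):
    """Mark a cell drivable when its range reading is close to the distance we expect
    for flat ground (made-up numbers); big deviations suggest an obstacle or a hole."""
    return [[abs(r - expected) <= tolerance for r in row] for row in lidar_scan]

def plan_path(terrain_map):
    """Pick the column with the most drivable cells, a stand-in for real path planning."""
    scores = [sum(column) for column in zip(*terrain_map)]
    return scores.index(max(scores))

def compute_controls(target_column, num_columns):
    """Steer toward the chosen column; the centre column means 'drive straight'."""
    offset = target_column - (num_columns - 1) / 2
    return Controls(steering_angle=0.05 * offset, throttle=0.4)

# One pass through the pipeline on a made-up 3x3 scan (distances in metres).
scan = [[5.0, 30.0, 6.0],
        [4.5, 25.0, 7.0],
        [5.5, 40.0, 6.5]]
terrain = build_terrain_map(scan)
column = plan_path(terrain)
print(compute_controls(column, num_columns=3))   # a gentle steer toward the clear lane
```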
