AI is getting smarter every day, but it still can’t match the human mind

By Guy Perelmuter

June 14, 2021

Artificial intelligence research can be subdivided in different ways: as a function of the techniques used (such as expert systems, artificial neural networks, or evolutionary computation) or of the problems addressed (e.g., computer vision, language processing, or predictive systems). Currently, one of the most commonly used artificial intelligence techniques for the development of new applications is known as machine learning. In basic terms, machine learning seeks to present algorithms with the largest possible volume of data, allowing systems to develop the capacity to autonomously draw conclusions. A simple way to describe the process is as follows: If we want to teach an image recognition system to identify a key, we show it the largest number of keys possible for its training. Then, the structure itself learns to identify whether subsequent images presented are or are not keys—even if the system never saw these images during its training.
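To make that loop concrete, here is a minimal sketch in Python using scikit-learn. It is an illustration of the train-then-predict process described above, not code from the original text; the load_key_dataset() helper and the random placeholder "images" are my own stand-ins for a real collection of labeled photographs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def load_key_dataset():
        # Hypothetical helper: stands in for a real set of labeled photographs.
        rng = np.random.default_rng(0)
        X = rng.random((1000, 32 * 32))    # 1,000 flattened 32x32 "images"
        y = rng.integers(0, 2, size=1000)  # 1 = key, 0 = not a key
        return X, y

    X, y = load_key_dataset()
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)  # "show it the largest number of keys possible"

    # The trained model now labels images it never saw during training.
    print("accuracy on unseen images:", model.score(X_test, y_test))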

Recognizing an image used to be a task in which humans had a clear advantage over machines—until relatively recently. Initiatives such as the ImageNet project, formulated in 2006, have served to significantly reduce this difference. Led by Chinese American researcher Fei-Fei Li, a computer science professor at Stanford University who also served as director of the Stanford Artificial Intelligence Lab (SAIL), the ImageNet project consists of a database with nearly 15 million images that have been classified by humans.

This repository of information is the raw material used to train computer vision algorithms and is available online free of charge. To boost development in the area of computer image recognition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) was created in 2010; in it, systems developed by teams from around the world compete to correctly classify the images presented to them. The evolution of the results obtained over less than a decade is proof of the extraordinary advances made in the field of deep learning (currently one of the most-used techniques in artificial intelligence, and a key enabler of—you guessed it—deep tech). In 2011, an error rate of 25% was considered good; in 2017, no fewer than 29 of the 38 participating teams obtained an error rate below 5%.

For decades, the development of computer programs was based on the equation “rules + data = results.” In other words, the rules were entered beforehand, input data was processed, and results were produced. But the paradigm used by systems based on deep learning is substantially different and seeks to imitate the way humans learn: “data + results = rules.”
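As a toy illustration of the older pattern (my own example, not the author's): in "rules + data = results," a programmer writes the rule by hand before any data is processed.

    def is_even(number):
        # The rule, written explicitly by a programmer ahead of time.
        return number % 2 == 0

    data = [3, 8, 15, 22]
    results = [is_even(n) for n in data]  # rules + data -> results
    print(results)                        # [False, True, False, True]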

Typically implemented through artificial neural networks (structures that are able to extract the characteristics necessary for the creation of rules from the data, and to produce results), these systems are on the front lines of platforms for facial recognition, voice recognition, computer vision, diagnostic medicine, and more. Once a sufficiently large set of examples (data) is presented with its respective classifications (results), the system obtains an internal representation of the rules—and becomes able to extrapolate the results for data it has not seen before.
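Below is a minimal sketch of the inverted "data + results = rules" paradigm, using a small neural network from scikit-learn. The parity task and the 8-bit encoding are illustrative assumptions of mine: no rule for "even" is ever coded, yet the network recovers one from labeled examples and extrapolates it to numbers it has not seen.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    numbers = np.arange(256)
    X = (numbers[:, None] >> np.arange(8)) & 1  # data: 8-bit binary encoding of each number
    y = numbers % 2 == 0                        # results: True if the number is even

    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X[:200], y[:200])                   # data + results -> (learned) rules

    # The learned rule extrapolates to numbers never seen during training.
    print("accuracy on unseen numbers:", net.score(X[200:], y[200:]))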

Doing the right thing

Although systems based on deep learning are able to improve the accuracy of virtually any classification task, it is essential to remember that their accuracy is highly dependent on the quality and type of data used during the learning phase. This is one of the biggest risk factors for the use of this technology: If the training is not done carefully, the results can be dangerous. In a 2016 study, three researchers from Princeton University—Aylin Caliskan, Joanna Bryson, and Arvind Narayanan—used nearly a trillion English words as input data. The results indicated that “language itself contains historic biases, whether these are morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the distribution of gender with respect to careers or first names.”
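The kind of measurement behind that finding can be sketched in a few lines. The snippet below is an illustrative association test of my own, not the researchers' code: it loads a pretrained GloVe embedding through gensim (my choice of model) and uses abbreviated word lists to ask whether a target word sits closer to "pleasant" or to "unpleasant" attribute words.

    import numpy as np
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")  # pretrained word embeddings (downloads on first use)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def association(word, pleasant, unpleasant):
        # Mean similarity to pleasant words minus mean similarity to unpleasant words.
        pos = np.mean([cosine(vectors[word], vectors[p]) for p in pleasant])
        neg = np.mean([cosine(vectors[word], vectors[u]) for u in unpleasant])
        return pos - neg

    pleasant = ["love", "peace", "friend"]
    unpleasant = ["hatred", "war", "enemy"]
    for target in ["flower", "insect"]:
        print(target, round(association(target, pleasant, unpleasant), 3))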

Also in 2016, the monthly magazine of the Association for Computing Machinery (the world’s largest learned society for computing, founded in 1947) published an article by Nicholas Diakopoulos (a PhD in computer science from the Georgia Institute of Technology) entitled “Accountability in Algorithmic Decision Making.” If so-called intelligent systems continue their expansion into different areas of business, services, and government, it will be critical that they not be contaminated by the biases that humans develop, whether consciously or subconsciously. The ideal model will likely involve collaboration between machines and humans, with the latter responsible for making decisions on topics with nuances and complexities not yet fully understood by models and algorithms.

The perceived significance of the changes coming to practically all industries is reflected in the increase in investment in startups from the sector: According to the firm CB Insights, this figure went from less than $2 billion in 2013 to more than $25 billion in 2019. Tech companies like Google, Microsoft, Apple, Facebook, and Amazon already incorporate intelligent techniques into their products and are moving toward a future where virtually all of their business lines will have a built-in machine learning component. This can apply to all types of applications: automatic simultaneous interpreting during a call, recommendations for whatever we want (or will want) to purchase online, or accurate voice recognition in interactions with our cell phones.

One of the big challenges for companies is to define the best way of using this set of new techniques, whose outputs contain probabilistic aspects. In other words, the algorithms estimate a solution to a given problem, with no guarantee that it is actually the best solution. Unless the process is robust and reliable, which is a function of the quality of the implementation and of the techniques used, the results can harm the financial health of the company in question.
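A short sketch of what those probabilistic outputs look like in practice (the synthetic data and choice of model are my own assumptions): the classifier returns an estimated probability for each possible answer rather than a guaranteed one.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in data: 500 examples, 10 features, two classes.
    X, y = make_classification(n_samples=500, n_features=10, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X[:400], y[:400])

    # Each prediction is an estimate with a confidence attached, not a certainty.
    for p in model.predict_proba(X[400:405]):
        print(f"class 0: {p[0]:.2f}   class 1: {p[1]:.2f}")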

Peace and war: machines have no free will

The integration of artificial intelligence mechanisms and weapons offers the possibility of truly autonomous weapons (autonomous weapons systems or lethal autonomous weapons). An armed drone equipped with facial recognition software could be programmed to kill a certain person or group of people and then to self-destruct, making it practically impossible to determine its source.

Machines do not have their own free will; they always follow the instructions of their programmers. These arms present significant risks, even when used only for defense purposes (a tenuous line for sure), and they evoke images of the killer robots that science-fiction authors have been writing about for decades.

In 2015, at the International Joint Conference on Artificial Intelligence, a letter advocating that this type of weapon be abolished was signed by theoretical physicist Stephen Hawking (1942–2018), entrepreneur Elon Musk, and neuroscientist Demis Hassabis (one of the founders of DeepMind, which was acquired by Google in 2014), among others. The discussion is still ongoing, but there are historical examples that speak to the benefits of involving humans such as Vasili Arkhipov (1926–1998) in life-and-death decisions.

In April of 1961, a group of Cuban exiles sponsored by the US Central Intelligence Agency failed in their attempt to invade Cuba at the Bay of Pigs. To prevent a future invasion, the Cuban government asked the Soviet Union to install nuclear missiles on the island. After obtaining unequivocal proof that these missiles were in fact being installed, the United States mounted a naval blockade to prevent more missiles from reaching the island and demanded the removal of those already installed, which were just 150 km (90 mi) from Florida. In October of 1962, the world watched as tensions between the United States and the Soviet Union mounted and reached their peak.

On October 27, when a Soviet B-59 submarine was located in nearby international waters, US Navy ships dropped depth charges near the vessel to force it to surface. With no contact from Moscow for several days and unable to use the radio, the submarine’s captain, Valentin Savitsky, was convinced that the Third World War had begun, and he wanted to launch a nuclear torpedo against the Americans. But the decision to launch a nuclear weapon from the B-59 needed to be unanimous among the three senior officers aboard: Captain Savitsky, political officer Ivan Maslennikov, and second-in-command Vasili Arkhipov, who was only 36 at the time. Arkhipov was the only one to dissent, recommending that the submarine surface in order to contact Moscow. Despite evidence that pointed to war, he remained firm and arguably saved the world from a nuclear conflict.

Adapted with permission from Guy Perelmuter’s Present Future: Business, Science, and the Deep Tech Revolution, published by Fast Company Press.

Guy Perelmuter is the founder of GRIDS Capital, a deep tech venture capital firm focusing on artificial intelligence, robotics, life sciences, and technological infrastructure.
