Can AI alone make decisions?


Deep Neural Networks and Brains: Methodological Differences and Similarities.

It hears, sees and speaks, but does it make decisions?
Creating human-like AI is about more than simulating human behavior: if the goal is to rely on the technology without human intervention, it must also be able to process information, or "think", in a way similar to how humans do.

New research, published in the journal Patterns and led by researchers at the School of Psychology and Neuroscience at the University of Glasgow, uses 3D modeling to analyze how deep neural networks (part of the broader family of machine learning) process information, and to visualize how their information processing matches that of humans.

The authors of the study hope this new work will pave the way for reliable artificial intelligence technology that processes information the way humans do, and thus makes errors we can understand and predict.

Inconsistencies and errors
One of the challenges still facing the development of AI is how to better understand the machine's thinking process, and whether it matches how humans process information, in order to ensure accuracy. Deep neural networks are often presented as the current best model of human decision-making behavior, achieving or even exceeding human performance on some tasks. However, even deceptively simple visual discrimination tasks can reveal obvious inconsistencies and errors in AI models when compared with humans.

Currently, deep neural network technology is used in applications such as facial recognition, and although it is very successful in these areas, scientists still do not fully understand how these networks process information, and therefore cannot explain why they make the errors they do.

Although deep networks are a good model for understanding the brain's perception of things, they work differently from the human brain

Deep neural networks are machine learning systems inspired by the network of brain cells or neurons in the human brain, which can be trained to perform specific tasks. These networks have played a pivotal role in helping scientists understand how our brains perceive the things we see.
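As a rough sketch of the idea (a toy illustration, not the architecture used in any of the studies discussed here), a deep network stacks layers of simple units, each applying a learned linear transform followed by a nonlinearity:

```python
import numpy as np

def relu(x):
    # The simple nonlinearity applied by each "neuron"
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a stack of (weights, bias) layers."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
# A toy 3-layer network: 8 inputs -> 16 -> 16 -> 4 outputs.
# In practice the weights are learned from training examples;
# here they are random, just to show the structure.
sizes = [8, 16, 16, 4]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

out = forward(rng.standard_normal(8), layers)
print(out.shape)  # (4,)
```

Training adjusts the weights `W` and biases `b` so that the final layer's outputs come to signal the task's categories.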

Although deep networks have developed greatly over the past decade, they are still a far cry from the performance of the human brain in perceiving visual cues.

In the new study, the research team tackled this problem by modeling the visual information a deep neural network extracts from a stimulus and transforming it in multiple ways, so they could show whether similar recognition arises from humans and the AI model processing similar information.

Professor Philippe Schyns, senior author of the study and head of the Institute of Neuroscience and Technology at the University of Glasgow, said: "When building AI models that behave like humans, for instance recognizing a person's face whenever they see it, as another human would, we must make sure that the AI model uses the same information from the face that another person would use to identify it. If the AI does not do this, we may have the illusion that the system works just like humans do, only to find that it errs in some new or untested circumstance."

Using a series of customizable 3D faces, the researchers asked humans to rate the similarity of these randomly generated faces to four familiar identities. They then used this information to test whether deep neural networks made the same classifications for the same reasons — testing not only whether humans and AI made the same decisions, but also whether they were based on the same information.
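The distinction being tested, same decisions versus same information, can be shown with a deliberately artificial sketch (the weights and stimuli below are made up for illustration): two linear "observers" can agree on every familiar stimulus while weighting features differently, until a new stimulus exposes the gap.

```python
import numpy as np

# Two toy "observers": linear classifiers that weight two facial
# features differently. w_a relies only on feature 0; w_b also
# leans on feature 1.
w_a = np.array([1.0, 0.0])
w_b = np.array([1.0, 0.3])

rng = np.random.default_rng(1)
# Familiar test stimuli where feature 0 is strongly informative (+/-1)
# and feature 1 is weak (uniform in [-1, 1]).
X = np.column_stack([np.sign(rng.standard_normal(100)),
                     rng.uniform(-1, 1, 100)])

# Fraction of stimuli on which the two observers decide identically
agreement = np.mean(np.sign(X @ w_a) == np.sign(X @ w_b))
print(agreement)  # 1.0 -- they agree on every familiar stimulus

# ...yet on a new stimulus where feature 0 is weak, they disagree:
x_new = np.array([0.1, -1.0])
print(np.sign(w_a @ x_new), np.sign(w_b @ x_new))  # 1.0 -1.0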

Importantly, through their approach, the researchers can visualize these findings as 3D faces driving the behavior of humans and networks. For example, the network that correctly categorized 2,000 identities was largely driven by a cartoonish face, indicating that it identified faces that process facial information completely different from humans.

Modeling the human brain
Robots: programming or innovation?
When deep neural networks were first developed in the 1980s, neuroscientists hoped that such systems could be used to model the human brain. However, computers of that era were not powerful enough to build models large enough to perform real-world tasks such as object recognition or speech recognition.

Over the past five years, advances in computing power and neural network technology have made it possible to use neural networks to perform challenging tasks in the real world, and they have become the standard approach in many engineering applications. In parallel, some neuroscientists have re-examined the possibility of using these systems to model the human brain.

This was an exciting opportunity for neuroscience, where scientists were able to create systems that could do some of the things that people could do, and then interrogate the models and compare them to the brain. In this regard, the MIT researchers trained neural networks to perform two auditory tasks, one involving speech and the other involving music.

For the speech task, the researchers gave the model thousands of two-second recordings of a person speaking. The task was to identify the word in the middle of the syllable. For the Music task, the model was asked to select a genre of music with a duration of about two seconds. Each clip also included background noise to make the task more realistic (and more challenging).

After several thousand examples, the model learned to perform the task with the same accuracy as a human listener. The idea is that over time the model gets better and better at the task. The hope is that he learns something general.

The Thatcher phenomenon
Nobody looks at the methodological differences
The model also tended to make mistakes in the same passages that humans make most mistakes.

In another recent study, SP Arun, assistant professor at CNS, and his team compared the different qualitative characteristics of these deep networks with those of the human brain.

In the current study, published in the journal Natural Communications, Aaron and his team attempted to understand the visual tasks that these networks can perform naturally by virtue of their architecture, and which require further training.

Although deep networks are a good model for understanding how the human brain perceives things, they function differently from the human brain. While complex computations are trivial to them, some tasks that are relatively easy for humans to solve are difficult to complete on these networks.

Lots of studies have shown similarities between deep networks and brains but no one has really looked at the methodological differences

The team studied 13 different cognitive effects and revealed previously unknown qualitative differences between deep networks and the human brain. An example of this is the so-called Thatcher effect, a phenomenon in which humans find it easier to recognize changes in local features in a vertical image, but this becomes difficult when the image is turned upside down.

Deep networks trained to recognize straight faces showed the Thatcher effect when compared to networks trained to recognize objects. Another visual property of the human brain, called mirror confusion, was tested on these networks. To humans, mirror reflections along the vertical axis look more similar than those along the horizontal axis. The researchers found that deep networks also show stronger mirror confusion for vertical images than for horizontally reflected images.

Another phenomenon specific to the human brain is the focus on the overall shape first. This is known as the global advantage effect. For example, in a picture of a tree, our brain will first see the tree as a whole before noticing the details of the leaves in it. Similarly, when presenting an image of a face, humans first look at the face as a whole, and then move on to focus on finer details such as the eyes, nose, mouth, etc., explains Jorgen Jacob, study co-author and PhD student at the institute. Neuroticism showed a local advantage.” This means that unlike the brain, networks focus on the finer details of the image first. So, although these neural networks and the human brain perform the same object recognition tasks, the steps that the two follow are very different.

“A lot of studies have shown similarities between deep networks and brains, but no one has really looked at the methodological differences,” says Aaron, who is the study's senior author.

Identifying these differences could prompt us to bring these networks closer to the brain. Such analyzes can help researchers build more robust neural networks that not only perform better but are also immune to "hostile attacks" intended to derail them.

The researchers hope this work will pave the way for more reliable artificial intelligence technology that behaves like humans and reduce unpredictable errors in the future.
Previous Post Next Post