Can computers understand the different meanings of words as humans do?

In Through the Looking-Glass, and What Alice Found There, the sequel to Alice's Adventures in Wonderland, Humpty Dumpty scornfully declares, "When I use a word, it means just what I choose it to mean, neither more nor less." "The question is," Alice replies, "whether you can make words mean so many different things."

Words carry many meanings, a phenomenon known as "semantic ambiguity". To pin down the precise meaning intended, the human mind must analyze a complex web of information and draw on intuition.

Today's search engines, translation applications, and voice assistants can grasp what we mean thanks to language processing programs that assign meaning to a staggering number of words without being explicitly told what those words mean. These programs infer meaning from statistics and algorithms.

But we are now entering a new era of artificial intelligence, in which computers can understand and analyze complex data and predict its future outcomes. Here another difficult question arises about how AI handles the meanings of words: can it recognize a word's different meanings?

That is why scientists are studying whether artificial intelligence can mimic the human brain and understand words the way humans do.

This was the subject of a study by researchers from the University of California, Los Angeles (UCLA) and the Massachusetts Institute of Technology (MIT), published in the journal Nature Human Behaviour on April 14.

Artificial intelligence that mimics humans
According to a press release published by UCLA, the study found that AI systems can indeed learn very complex word meanings. It also showed that the AI system studied encoded the meanings of words in a way that closely tracks human judgments of those words' semantics.

Thus, this approach could assign as much information to each individual word as the human brain does, according to the MIT press release.

Language models derive meaning by analyzing how often pairs of words co-occur in different texts. The models then use those co-occurrence patterns to assess how similar the words' meanings are.

For example, these models conclude that the words "bread" and "apple" are more similar to each other than either is to "notebook." This is because "bread" and "apple" often appear alongside words such as "eat" or "snack," whereas "notebook" rarely does.
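The co-occurrence idea described above can be sketched in a few lines of Python. The context words and counts below are invented purely for illustration (they are not data from the study); the point is only that words sharing contexts end up with similar count vectors:

```python
from math import sqrt

# Toy co-occurrence counts: how often each target word appears near
# the context words [eat, snack, page, write]. All numbers invented.
cooccurrence = {
    "bread":    [12, 5, 0, 0],
    "apple":    [9,  7, 0, 1],
    "notebook": [0,  0, 8, 6],
}

def cosine(u, v):
    """Cosine similarity between two count vectors (1 = same direction, 0 = unrelated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# "bread" and "apple" share contexts, so their similarity is high;
# "bread" and "notebook" share none, so it is zero.
print(cosine(cooccurrence["bread"], cooccurrence["apple"]))
print(cosine(cooccurrence["bread"], cooccurrence["notebook"]))
```

Real language models learn dense vectors from billions of words rather than raw counts over four contexts, but the underlying signal is the same.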

Testing how language models understand words
The models were remarkably good at measuring the overall similarity of words to each other. But most words carry several kinds of information, and how similar two words are depends on which attribute is being compared.

"Humans can devise different mental scales that help them organize their understanding of words. For example, dolphins and crocodiles may be similar in size, but one is much more dangerous than the other," says Gabriel Grand, the study's lead author, from the Massachusetts Institute of Technology.

The team wanted to see whether the models could pick up on those nuances as humans do, and if so, how they organize the information.

To see how the model's handling of words correlates with human understanding, the team asked human volunteers to rate words along different semantic scales: were the concepts the words conveyed "big or small," "safe or dangerous," "wet or dry," and so on? Once the volunteers had placed each word at its exact position on those scales, the researchers checked whether language processing models did the same.

Grand notes that language processing models use co-occurrence statistics to organize words into a huge multidimensional array: the more similar words are on some scale, the closer together they sit within that array.

High-dimensional arrays
He states that this array spans a vast number of dimensions, and that its structure carries no inherent meaning for the words it holds. Grand adds that there are "hundreds of dimensions for some of the words embedded in the matrix, and we have no idea what those dimensions mean."

The scientists took the semantic scales the volunteers had used to rate words and asked whether those scales were also represented in language processing models. For example, the team located dolphins and tigers on the "size" scale, then compared the distance between them there with their distance on the "danger" scale.
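One common way to place words on such a scale is to score each word along the axis running between the scale's two pole words (for example, from "small" to "big"). The sketch below illustrates that idea with invented 3-dimensional toy vectors; they are not the study's actual embeddings, and real models use hundreds of dimensions:

```python
# Toy "embeddings" (invented numbers, not from a real model).
emb = {
    "small":   [0.1, 0.0, 0.2],
    "big":     [0.9, 0.1, 0.2],
    "safe":    [0.5, 0.0, 0.1],
    "danger":  [0.5, 0.9, 0.1],
    "dolphin": [0.7, 0.2, 0.3],
    "tiger":   [0.7, 0.8, 0.3],
}

def project(word, lo, hi):
    """Scalar position of `word` on the lo -> hi scale (dot product with the pole axis)."""
    axis = [h - l for l, h in zip(emb[lo], emb[hi])]
    return sum(w * a for w, a in zip(emb[word], axis))

# Dolphins and tigers land close together on the size scale...
size_gap = abs(project("dolphin", "small", "big") - project("tiger", "small", "big"))
# ...but far apart on the danger scale.
danger_gap = abs(project("dolphin", "safe", "danger") - project("tiger", "safe", "danger"))
print(size_gap, danger_gap)
```

The same two word vectors yield a small gap on one scale and a large gap on another, which is exactly the kind of scale-dependent similarity the researchers probed.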

Across more than 50 combinations of word categories and semantic scales, the researchers found that language processing models ranked words much as humans do. The models rated dolphins and tigers as similar in "size" but far apart on the "danger" and "wetness" scales. The models organized words in ways that capture these different kinds of meaning, and they did so based entirely on how often words co-occurred in the texts they learned from.

Interestingly, the language processing model rated the names "Betty" and "George" as similar on the "age" scale but far apart on the "gender" scale. It also rated "weightlifting" and "fencing" as similar in that both are "indoor" sports, but different in how much intelligence they require.

The team notes that this demonstrates the power of language: from these simple statistics we can recover a great deal of rich semantic information, a powerful source of knowledge about things of which we may have no direct experience.