Chinese scientists have developed a new neural network that allows artificial intelligence (AI) to form concepts from raw sensory data such as vision and hearing, simulating fundamental aspects of human cognition, according to a study recently published in the journal Nature Computational Science.
One of the remarkable abilities of the human brain is to form more abstract conceptual representations of sensorimotor experiences and apply them flexibly without relying on direct sensory input.
However, the computational mechanisms underlying this ability have remained poorly understood. Large language models illustrate the gap: because they rely on existing linguistic data, they are fundamentally limited and cannot spontaneously form new concepts through experience-based learning.
Researchers from the Institute of Automation of the Chinese Academy of Sciences (CAS) and Peking University proposed a new neural network framework, CATS Net, to overcome this limitation.
The framework pairs a concept abstraction module with a task-solving module, which together direct it to perform tasks such as recognition and judgment when processing visual inputs like images.
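The article does not detail the architecture, but the two-module design it describes can be sketched in a minimal, hypothetical form: one component compresses raw sensory input into a compact concept vector, and a second component solves a task (here, classification) using only that vector. All names and dimensions below are illustrative assumptions, not the published CATS Net design.

```python
import numpy as np

rng = np.random.default_rng(0)

class ConceptAbstraction:
    """Hypothetical concept-abstraction module: compresses raw sensory
    input (e.g. a flattened image) into a low-dimensional concept vector."""
    def __init__(self, in_dim, concept_dim):
        self.W = rng.normal(scale=0.1, size=(in_dim, concept_dim))

    def __call__(self, x):
        # A bounded nonlinear projection serves as the abstract representation.
        return np.tanh(x @ self.W)

class TaskSolver:
    """Hypothetical task-solving module: maps concept vectors to task
    outputs such as recognition labels."""
    def __init__(self, concept_dim, n_classes):
        self.W = rng.normal(scale=0.1, size=(concept_dim, n_classes))

    def __call__(self, c):
        return int(np.argmax(c @ self.W))

abstraction = ConceptAbstraction(in_dim=64, concept_dim=8)
solver = TaskSolver(concept_dim=8, n_classes=3)

image = rng.normal(size=64)   # stand-in for raw visual input
concept = abstraction(image)  # abstract away from the raw pixels
label = solver(concept)       # downstream recognition/judgment task
```

The point of the split is that the task module never touches raw pixels: everything downstream operates on the concept vector alone, which is what makes the representation reusable across tasks.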
The framework can also autonomously generate a variety of new concepts, constructing its own unique "concept space." Once the concept spaces of different AI systems are aligned, they can directly transmit knowledge using those concepts, without the need for retraining with raw data. This process simulates how humans communicate using language.
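The retraining-free knowledge transfer described above can be illustrated with a toy numerical sketch (an assumption for illustration, not the paper's method): if two agents embed the same shared concepts in different linear "concept spaces," a single least-squares map fitted on those shared concepts lets one agent transmit a new concept directly into the other's space.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical agents represent the same 5 shared concepts
# in their own concept spaces (different bases, same structure).
shared = rng.normal(size=(5, 4))
basis_a = rng.normal(size=(4, 4))
basis_b = rng.normal(size=(4, 4))
space_a = shared @ basis_a  # agent A's view of the shared concepts
space_b = shared @ basis_b  # agent B's view of the same concepts

# Align the two spaces once with a least-squares linear map A -> B.
M, *_ = np.linalg.lstsq(space_a, space_b, rcond=None)

# Agent A "communicates" a brand-new concept without any retraining:
new_concept = rng.normal(size=4)
sent = new_concept @ basis_a      # expressed in A's concept space
received = sent @ M               # mapped directly into B's space
target = new_concept @ basis_b    # what B's own encoding would be
```

Here `received` matches `target` closely, showing that once the spaces are aligned, new knowledge crosses over without exchanging raw data, loosely analogous to how a shared language lets humans transmit concepts.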
Through brain imaging studies, the researchers revealed that the conceptual space constructed by CATS Net closely aligns with human cognitive and linguistic logic, and its operational mode closely resembles activity in the concept-processing areas of the human brain.
This shows that the model does more than mimic brain function: it also offers insight into the computational mechanisms the human brain uses to form and apply concepts.
