China's DeepSeek has launched its latest artificial intelligence model, version 4 (V4), a move that reflects the escalating global competition in the sector.
The company explained that the new model comes in two versions, DeepSeek-V4-Pro and DeepSeek-V4-Flash. The latter offers higher efficiency and lower operating costs, while the Pro version provides stronger capabilities in reasoning, knowledge, and analysis.
The Hangzhou-based company said that DeepSeek-V4-Pro supports an ultra-long context of up to one million characters, enabling it to understand long texts and process very large amounts of data, which it described as a new achievement in the field of open-source models.
It added that this version achieves advanced performance in intelligent-agent capabilities, world knowledge, and reasoning, and that it surpasses most other open-source models, trailing only Google's Gemini-Pro-3.1, which is a closed-source model.
DeepSeek-V4-Pro is also equipped with what the company describes as a "maximum reasoning effort mode," a feature designed to further enhance the model's cognitive capabilities and solidify its position as the best open-source model currently available.
The company noted that the new model was optimized to run better on domestic Chinese processors, given the tightening restrictions imposed by the United States on exports of advanced semiconductors to China, especially the graphics processing units (GPUs) needed to develop artificial intelligence models.
Although the company did not disclose the type of chips used to train the V4 model, it explained that its software components are designed to work with chips from Nvidia and Huawei.
The company has also announced that the model is capable of processing up to 384,000 tokens, explaining that tokens are the basic unit of data that artificial intelligence models work with; a token may be a word or part of a word. The faster a model can process them, the more efficiently it learns and responds.
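To make the idea of tokens concrete, the sketch below shows a toy greedy tokenizer. This is purely illustrative and is not DeepSeek's actual tokenizer: production models use learned subword vocabularies (for example, byte-pair encoding), and the vocabulary and splits here are hypothetical.

```python
def toy_tokenize(text, vocab):
    """Greedily split text into the longest pieces found in vocab,
    falling back to single characters for unknown spans."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible match starting at position i.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            # No vocabulary entry matched: emit one character as a token.
            tokens.append(text[i])
            i += 1
    return tokens

# A made-up vocabulary for illustration only.
vocab = {"art", "ificial", " intelligence"}
print(toy_tokenize("artificial intelligence", vocab))
# → ['art', 'ificial', ' intelligence']
```

A real tokenizer works on the same principle of splitting text into reusable pieces, but its vocabulary (often tens of thousands of entries) is learned from data, which is why context lengths and processing limits are quoted in tokens rather than words.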
DeepSeek said the new release achieves a "quantum leap in computational efficiency" thanks to its support for a context of up to one million characters, opening the door to a new generation of models capable of handling longer and more complex texts and scenarios.
In a direct comparison, the company said that DeepSeek-V4-Pro outperforms Google's Gemini-3.1-Pro in understanding long texts, but is still less efficient than Anthropic's Claude Opus 4.6 model.
This launch comes after the huge buzz the company generated last year when it introduced its R1 model, which rivaled the performance of AI systems like ChatGPT despite being developed at a much lower cost, causing significant turmoil in the stock markets at the time.
The company concluded its statement by emphasizing that through this release it seeks to enhance the model's intelligence and practicality, and expand its use in various daily and professional tasks and scenarios.
