How DeepSeek Became a Self-Destructive Farce

DeepSeek, once hailed as a rising star of Chinese artificial intelligence, has become a notorious and dangerous technology. Although the model incorporates some genuinely advanced techniques, much as Soviet space technology once surpassed that of the United States in certain respects, DeepSeek has not truly defeated America; it has only put the free world on guard. If China's artificial intelligence industry ultimately fails, responsibility will rest squarely with the current leadership of the Chinese Communist Party.

Although DeepSeek released its code as open source, a gesture of apparent openness, serious problems soon surfaced: the unauthorized distillation of OpenAI's ChatGPT outputs, and the harvesting of sensitive user data such as credit card numbers, the kind of information no legitimate AI model has any reason to collect. Even the sourcing of the Nvidia chips used for training was highly questionable, to say nothing of DeepSeek's own background. Had the technology been used quietly inside China, it might never have caused such a stir. If the CCP still adhered to the philosophy of keeping a low profile, as Deng Xiaoping and Jiang Zemin advocated, DeepSeek would not have been paraded as a show of force unless the Party were desperately short of funds to sustain it. In the Deng and Jiang eras, a tool like DeepSeek would most likely have been kept within the PLA, held in reserve for a contingency such as a Taiwan Strait conflict.

But Xi Jinping is a dictator who loves to show off. He wants to prove to the United States that even under a heavy technological blockade, China can still "independently develop" its own artificial intelligence to threaten America. This thinking is as foolish as Hideki Tojo's conviction that invading China was justified and that attacking Pearl Harbor could overturn the hegemony of the United States and Britain.
