✨ Explore this insightful post from TechCrunch 📖
📂 **Category**: AI, DeepSeek, DeepSeek V4, Open Source AI
📌 **What You’ll Learn**:
Chinese AI lab DeepSeek has launched two preview versions of its latest large language model, DeepSeek V4, a long-awaited update to last year’s V3.2 model and the accompanying R1 reasoning model that took the AI world by storm.
The company says both DeepSeek V4 Flash and V4 Pro are mixture-of-experts models with context windows of 1 million tokens each, enough to fit entire codebases or large documents in a single prompt. The mixture-of-experts approach activates only a subset of the model’s parameters for each token, which reduces inference costs.
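The routing idea behind mixture-of-experts can be sketched in a few lines. This is an illustrative toy, not DeepSeek’s actual architecture: the expert functions, gate scores, and `top_k` value below are all invented for the example. The point it shows is that only the top-k experts run for a given input, so most parameters stay idle.

```python
import math
from typing import Callable, List

def moe_forward(x: float,
                experts: List[Callable[[float], float]],
                gate_scores: List[float],
                top_k: int = 2) -> float:
    """Run only the top_k highest-scoring experts and blend their outputs."""
    # Pick the indices of the top_k gate scores: the "active" experts.
    active = sorted(range(len(experts)),
                    key=lambda i: gate_scores[i],
                    reverse=True)[:top_k]
    # Softmax over the selected scores so the mixing weights sum to 1.
    exps = [math.exp(gate_scores[i]) for i in active]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Inactive experts are never evaluated: that is the cost saving.
    return sum(w * experts[i](x) for w, i in zip(weights, active))

# Four toy "experts"; the gate routes each input to the best two.
experts = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3, lambda v: v * v]
out = moe_forward(2.0, experts, gate_scores=[0.1, 3.0, 0.2, 2.0], top_k=2)
```

With `top_k=2` out of four experts, half the experts never execute for this input, which is the same lever that lets a 1.6-trillion-parameter model activate only 49 billion parameters per token.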
The Pro model has 1.6 trillion total parameters (49 billion active), making it the largest open-weight model available, ahead of Moonshot AI’s Kimi K2.6 (1.1 trillion) and MiniMax’s M1 (456 billion), and more than double the size of DeepSeek V3.2 (671 billion). The smaller version, V4 Flash, has 284 billion parameters (13 billion active).
DeepSeek says both models are more efficient and performant than DeepSeek V3.2 thanks to architectural improvements, and have nearly “closed the gap” with current leading models, both open and closed, on reasoning benchmarks.
The company claims that the new V4-Pro-Max model outperforms its open-source peers across reasoning benchmarks, and beats OpenAI’s GPT-5.2 and Google’s Gemini 3.0 Pro on some tasks. On competitive-programming benchmarks, DeepSeek said the performance of both V4 models is “comparable to GPT-5.4.”

However, the models appear to lag slightly behind frontier models on knowledge tests, specifically OpenAI’s GPT-5.4 and Google’s latest Gemini 3.1 Pro. The lab wrote that this gap indicates “an evolutionary path that lags behind modern models by about 3 to 6 months.”
V4 Flash and V4 Pro support text only, unlike many of their closed-source peers, which can understand and generate audio, video, and images.
Notably, DeepSeek V4 is much less expensive than any frontier model available today. The smaller V4 Flash costs $0.14 per million input tokens and $0.28 per million output tokens, undercutting GPT-5.4 Nano, Gemini 3.1 Flash, GPT-5.4 Mini, and Claude Haiku 4.5. Meanwhile, the larger V4 Pro costs $0.145 per million input tokens and $3.48 per million output tokens, which also undercuts Gemini 3.1 Pro, GPT-5.5, GPT-5.4, and Claude Opus 4.7.
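To make the pricing concrete, here is a back-of-the-envelope cost check using the V4 Flash rates quoted above ($0.14 per million input tokens, $0.28 per million output tokens). The example request sizes (a 50,000-token prompt with a 2,000-token reply) are invented for illustration.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one request at the given per-million-token rates."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# A hypothetical 50k-token prompt with a 2k-token reply on V4 Flash:
cost = request_cost(50_000, 2_000, in_price_per_m=0.14, out_price_per_m=0.28)
# Under a cent per request: (50,000 * $0.14 + 2,000 * $0.28) / 1,000,000
```

At these rates, even a large long-context request costs a fraction of a cent, which is the basis of the “undercuts” comparison above.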
This launch comes a day after the United States accused China of stealing the intellectual property of American artificial intelligence laboratories on an industrial scale using thousands of proxy accounts. Anthropic and OpenAI have accused DeepSeek itself of “distilling” and copying their AI models.
🔥 **What’s your take?**
Share your thoughts in the comments below!
#️⃣ **#DeepSeek #previews #model #closes #gap #leading #models**
🕒 **Posted on**: April 25, 2026 (UTC)
