Chinese AI firm DeepSeek has made yet another splash with the release of V3.2, the latest iteration in its V3 model series. Launched Monday, the model, which builds on an experimental V3.2 version announced in October, comes in two versions: "Thinking" and a more powerful "Speciale." DeepSeek said V3.2 pushes the capabilities of open-source AI even further. Like other DeepSeek models, it costs a fraction of what proprietary models do, and the underlying weights can be accessed via Hugging Face.
Qwen3-Coder-480B-A35B delivers state-of-the-art advances in agentic coding and code tasks, matching or outperforming Claude Sonnet 4, GPT-4.1, and Kimi K2. The 480B model scores 61.8% on the Aider Polyglot benchmark and supports a 256K-token context window, extendable to 1M tokens.