DeepSeek, Qwen and the New Price War in AI
DeepSeek and Qwen set new standards in open weights, Anthropic shows agent effects, and Microsoft and Apple deliver fresh signals.
Today, once again, it becomes clear: in AI, it's no longer just about who builds the biggest model, but about who uses it most intelligently, most cheaply, and most practically. Open weights, pricing, agent economics, and big-tech strategies are all pointing in the same direction today: the market is maturing, and getting more complex.
For you, that means: more choice among LLMs, more pressure on prices, and more movement among the players that previously seemed entrenched. In short: if you’re building AI products today, you shouldn’t just read benchmarks — you should also read the bills.
🚀 DeepSeek V4 makes a statement on price
With V4-Pro and V4-Flash, DeepSeek is releasing two new open-weights models that already sound like “too much of a good thing” on paper: up to 1.6 trillion parameters and a context window of one million tokens. Even more interesting than the raw numbers is the price: DeepSeek undercuts many offerings from OpenAI, Google, and Anthropic by a wide margin. This is not just a PR win, but a signal to the entire LLM market.
Why does this matter? Because two trends are intersecting here: on the one hand, large models are becoming more capable; on the other, inference is increasingly becoming a cost factor. If a provider can run a model of this size more cheaply, it shifts the economics of chatbots, analytics tools, and agent workflows. The paper also includes details on training data, distillation, and hardware — exactly the places where real competitiveness is created, not just marketing.
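To make the "inference is a cost factor" point concrete, here is a toy bill calculator. The per-token prices below are entirely hypothetical placeholders (the article names no numbers); the sketch only illustrates how a price gap compounds across a realistic workload.

```python
# Toy inference-cost comparison. All prices are hypothetical and do NOT
# come from the article; they only show how per-token pricing compounds.

def monthly_cost(requests_per_day, in_tokens, out_tokens,
                 price_in_per_m, price_out_per_m, days=30):
    """Estimated monthly API bill in USD, given per-1M-token prices."""
    per_request = (in_tokens * price_in_per_m +
                   out_tokens * price_out_per_m) / 1_000_000
    return requests_per_day * per_request * days

# Hypothetical: an incumbent at $3 / $15 per 1M tokens (input/output)
# vs. a cheaper open-weights host at $0.30 / $1.20.
incumbent = monthly_cost(10_000, 2_000, 500, 3.00, 15.00)
cheaper   = monthly_cost(10_000, 2_000, 500, 0.30, 1.20)
print(f"incumbent: ${incumbent:,.0f}/month")  # → incumbent: $4,050/month
print(f"cheaper:   ${cheaper:,.0f}/month")    # → cheaper:   $360/month
```

At 10,000 requests a day, an order-of-magnitude price difference is the difference between a rounding error and a budget line item, which is exactly why "read the bills, not just the benchmarks" matters.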
Source: The Decoder
🧠 Qwen3.6-27B beats its own XXL predecessor
With Qwen3.6-27B, Alibaba shows that size alone isn’t everything after all. The new open-source model with 27 billion parameters is said to outperform its 15-times-larger predecessor in coding benchmarks — consistently. For developers, that’s the nicest kind of surprise: less compute load, lower costs, but better results.
The finding fits a broader pattern in the open-source LLM space: good architecture, clean training, and focused optimization are often more important than simply inflating parameter counts. For use in coding assistants, local tools, or enterprise setups, this is especially interesting because smaller models are easier to host and cheaper to infer. So if you thought only the very largest models could produce usable code, today is a good day to reconsider.
Source: The Decoder
🤖 Anthropic shows how agents can distort the market
Anthropic let 69 AI agents trade on an internal marketplace for a week to test how different model strengths affect negotiations. Result: stronger models get better deals — and the disadvantaged side doesn’t notice. At first glance, this sounds like an academic thought experiment, but in reality it is quite close to the future of agentic AI and automated transactions.
Why does this matter? Because AI agents don’t just complete tasks — they can also negotiate prices, terms, and priorities. Once models are buying, booking, or negotiating for humans, a new economic imbalance emerges: whoever has the better model gets better terms. That’s not a bug, but a possible feature of the market — and precisely why it’s also a topic for regulation, transparency, and fairness. One small consolation: the agents were polite, presumably thanks to excellent prompting.
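The "better model, better terms" dynamic can be sketched as a deliberately simple toy simulation. This is not Anthropic's experimental setup; it only assumes that the final price lands somewhere in the bid-ask spread, weighted toward the stronger negotiator.

```python
import random

def negotiate(seller_strength, buyer_strength, ask, bid, rng):
    """Toy model: the settled price lies between bid and ask, weighted
    toward the stronger party. Purely illustrative; nothing here
    reflects the design of Anthropic's marketplace experiment."""
    # Weight in [0, 1]: 1.0 means the seller captures the whole spread.
    w = seller_strength / (seller_strength + buyer_strength)
    w = min(1.0, max(0.0, w + rng.uniform(-0.05, 0.05)))  # small noise
    return bid + w * (ask - bid)

rng = random.Random(0)
# A seller backed by a stronger model vs. a weaker buyer, 1000 deals:
prices = [negotiate(0.8, 0.4, ask=100, bid=60, rng=rng) for _ in range(1000)]
avg = sum(prices) / len(prices)
print(round(avg, 1))  # settles well above the spread's midpoint of 80
```

Even in this crude form, the asymmetry never shows up in any single deal, only in the aggregate, which is why the disadvantaged side can fail to notice it.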
Source: The Decoder
🎮 Microsoft’s new Xbox strategy: “We are Xbox”
Microsoft’s gaming division is getting a new brand strategy, and the phrase “We are Xbox” suggests where things are headed: less hardware identity, more platform and ecosystem thinking. According to the report, the new leadership duo is even considering making games exclusive again. For a brand that has increasingly thought cross-platform in recent years, that’s an interesting signal.
For AI Radar, this is especially interesting because it shows how major tech companies are currently reorganizing their positions — often in parallel with AI strategies, cloud offerings, and subscription models. Exclusivity can create attention and loyalty in the short term, but it can also cost reach in the long term. Microsoft therefore appears to be trying to find a new balance between brand, platform, and monetization. And as always: when corporations say it’s only about strategy, it’s usually about power too.
Source: heise
🍏 Tim Cook is stepping down: What does this mean for Apple?
Tim Cook is expected to step down as CEO of Apple in September, with John Ternus apparently lined up as his successor. This is more than a personnel change, because Cook has guided Apple into a phase of enormous stability: billions in revenue, strong margins, and an extremely controlled ecosystem. But that very ecosystem is under more pressure today than at the start of his tenure — from regulation, App Store debates, and the growing AI race.
For Apple, the transition comes at a delicate moment. Ternus takes over a company that is financially thriving, but strategically needs new answers: How does Apple integrate AI meaningfully into its products? How does it respond to changing platform rules? And how does it stay relevant when hardware alone is no longer enough? The leadership change may therefore be less of a break than a test: can Apple steer its next era as elegantly as the last one?
Source: TechCrunch
🛠️ Tool tip of the day: Test and compare Qwen locally
If the open-source side is what interests you most today, it's worth looking at tools that let you quickly try out models like Qwen or DeepSeek, compare them, and integrate them into your own workflows. Especially for coding and inference tests, you don't want to spin up full infrastructure every time. Useful here are local model runners, API abstractions, and benchmark setups that let you directly compare price, speed, and quality.
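A minimal comparison harness might look like the sketch below. The `generate` callable is injected so the harness itself runs without any server; in practice you would swap in an HTTP call to whatever local runner you use (for example an Ollama or llama.cpp endpoint). The model names and the `fake_generate` stub are hypothetical placeholders.

```python
import time

def compare(models, prompt, generate):
    """Run `generate(model, prompt)` for each model and record wall-clock
    latency and output length -- a minimal side-by-side check.
    `generate` is an injected callable so any backend can be plugged in."""
    results = {}
    for model in models:
        t0 = time.perf_counter()
        text = generate(model, prompt)
        results[model] = {
            "seconds": time.perf_counter() - t0,
            "chars": len(text),
        }
    return results

# Stand-in generator so the harness runs offline. Replace this with a
# real call to your local runner's API to get meaningful numbers.
def fake_generate(model, prompt):
    return f"[{model}] answer to: {prompt}"

out = compare(["qwen-local", "deepseek-local"], "Write a haiku.", fake_generate)
for model, stats in out.items():
    print(model, round(stats["seconds"], 4), stats["chars"])
```

Once latency and output quality are logged per model, adding the per-token price gives you the full cost/speed/quality comparison in one table.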
Don’t want to miss any news? Subscribe to the newsletter