Nvidia Drops Nemotron 3 Super Amid $26 Billion Open-Model AI Bet—America's Answer to Qwen?

Source: Decrypt

Published: 18:46 UTC

BTC Price: $69,918.40

#nvidia #ai #opensourceai

Analysis

Price Impact

High

Nvidia's significant investment in open-source AI and the launch of Nemotron 3 Super could lead to increased demand for its hardware (GPUs) if developers adopt the new model widely, especially given its performance claims and lower cost for autonomous agents. This could boost Nvidia's stock price.

Trustworthiness

High

Price Direction

Bullish

The news indicates a strong strategic push by Nvidia to dominate the open-source AI landscape, a move designed to secure its hardware dominance and counter competition. Successful adoption of Nemotron 3 Super by developers and enterprises would translate directly into higher demand for Nvidia's AI chips, driving the stock price up.

Time Effect

Long

This is a strategic bet on the future of AI and open-source development. The $26 billion investment over five years suggests a long-term vision. The impact on Nvidia's stock will likely unfold over months and years as adoption of Nemotron 3 Super and the broader open-source AI ecosystem matures.

Original Article:

Article Content:

In brief: Nvidia launched Nemotron 3 Super, a 120B open-weight AI model optimized for autonomous agents and ultra-long-context tasks. The hybrid Mamba-Transformer MoE architecture delivers faster reasoning and over 5× throughput while running at 4-bit precision. Nvidia's $26 billion investment in open-source AI is meant to counter China's rise in the field.

Nvidia just shipped Nemotron 3 Super, a 120-billion-parameter open-weight model built to do one thing well: run autonomous AI agents without bleeding your compute budget dry.

That's not a small problem. Multi-agent systems generate far more tokens than a normal chat—every tool call, reasoning step, and slice of context gets re-sent from scratch. As a result, costs explode, models tend to drift, and the agents slowly forget what they were supposed to be doing in the first place… or at least lose accuracy.

Nemotron 3 Super is Nvidia's answer to all of that. The model runs 12 billion active parameters out of 120 billion total, using a mixture-of-experts (MoE) design that keeps inference cheap while retaining the reasoning depth complex workflows need. It packs a 1-million-token context window, so agents can hold an entire codebase, or nearly 750,000 words, in memory before collapsing.

To build the model, Nvidia combined three components that rarely appear together in the same architecture: Mamba-2 state-space layers—a faster, memory-efficient alternative to attention for handling long token streams—along with Transformer attention layers for precise recall, and a new "Latent MoE" design that compresses token embeddings before routing them to experts. That lets the model activate four times as many specialists at the same compute cost.
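The core MoE idea above — only a fraction of the total parameters fire for any given token — can be sketched in a few lines of Python. This is a toy illustration, not Nemotron's actual architecture: the expert count, embedding size, and top-k value here are invented for the example, and real routers work on full transformer layers, not single matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 10   # total experts (illustrative, not Nemotron's real count)
TOP_K = 1          # experts activated per token
DIM = 8            # toy token-embedding size

# Each "expert" here is just a small weight matrix.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
# The router scores every expert for a given token.
router = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route a token to its top-k experts; the other experts stay idle."""
    scores = token @ router                  # one score per expert
    top = np.argsort(scores)[-TOP_K:]        # indices of the chosen experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                 # softmax over the chosen experts only
    # Only TOP_K of NUM_EXPERTS weight matrices are touched per token —
    # the same reason "12B active out of 120B total" keeps inference cheap.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(DIM))
print(out.shape)  # (8,)
```

With TOP_K = 1 of 10 experts, each token exercises 10% of the expert weights, mirroring the 12B-of-120B active ratio the article describes.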
Introducing NVIDIA Nemotron 3 Super 🎉 Open 120B-parameter (12B active) hybrid Mamba-Transformer MoE model. Native 1M-token context. Built for compute-efficient, high-accuracy multi-agent applications. Plus, fully open weights, datasets and recipes for easy customization and… pic.twitter.com/kMFI23noFc — NVIDIA AI Developer (@NVIDIAAIDev) March 11, 2026

The model was also pretrained natively in NVFP4, Nvidia's 4-bit floating-point format. In practice, that means the system learned to operate accurately within 4-bit arithmetic from the very first gradient update, rather than being trained at high precision and compressed afterward, which often causes models to lose accuracy.

For context, a model's precision is measured in bits. Full precision, known as FP32, is the gold standard, but it is also extremely expensive to run at scale. Developers often reduce precision to save compute while trying to preserve useful performance. Think of it like shrinking a 4K image down to 1080p: the picture still looks the same at a glance, just with less detail. Normally, dropping from 32-bit precision all the way to 4-bit would cripple a model's reasoning ability. Nemotron avoids that problem by learning to operate at low precision from the start, instead of being squeezed into it later.

Compared to its own predecessor, Nemotron 3 Super delivers more than five times the throughput. Against external rivals, it's 2.2× faster than OpenAI's GPT-OSS 120B on inference throughput, and 7.5× faster than Alibaba's Qwen3.5-122B.

We ran our own quick test. The reasoning held up well, including on prompts that were deliberately vague, badly worded, or based on wrong information. The model caught small errors in context without being asked to, handled math and logic problems cleanly, and didn't fall apart when the question itself was slightly off.
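The bit-precision trade-off described above can be made concrete with a toy quantizer. To be clear, this is not NVFP4 — that is a floating-point format with its own exponent/mantissa split and per-block scaling — but a plain 4-bit linear quantizer is enough to show the core constraint: 4 bits give you only 16 representable levels, and everything else becomes rounding error.

```python
import numpy as np

def quantize_4bit(x: np.ndarray):
    """Map float values onto a 16-level (4-bit) grid spanning their range.

    A toy linear quantizer, not NVFP4: the level-count intuition is the
    same, since 4 bits means only 2**4 = 16 representable values.
    """
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / 15                              # 16 levels -> 15 steps
    codes = np.round((x - lo) / scale).astype(np.int8)  # integer codes 0..15
    return codes, lo, scale

def dequantize(codes: np.ndarray, lo: float, scale: float) -> np.ndarray:
    """Reconstruct approximate floats from the 4-bit codes."""
    return codes.astype(np.float64) * scale + lo

x = np.linspace(-1.0, 1.0, 100)       # "full precision" input values
codes, lo, scale = quantize_4bit(x)
x_hat = dequantize(codes, lo, scale)

print(np.unique(codes).size)          # at most 16 distinct codes survive
print(np.abs(x - x_hat).max())        # worst-case error: half a grid step
```

The rounding error is what post-training compression inflicts all at once; training natively in a 4-bit format, as the article describes, lets the model's weights adapt to living on a coarse grid from the start.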
The full training pipeline is public: weights on Hugging Face, 10 trillion curated pretraining tokens (out of 25 trillion seen in total during training), 40 million post-training samples, and reinforcement learning recipes across 21 environment configurations. Perplexity, Palantir, Cadence, and Siemens are already integrating the model into their workflows.

The $26 billion bet

The model may be one piece of a larger strategy. A 2025 financial filing shows Nvidia plans to spend $26 billion over the next five years building open-weight AI models. Executives have confirmed it, too: Bryan Catanzaro, VP of applied deep learning research, told Wired the company recently finished pretraining a 550-billion-parameter model.

Nvidia released its first Nemotron model back in November 2023, but that filing makes clear this is no longer a side project. The investment is strategic: Nvidia's chips are still the default infrastructure for training and running frontier models, and models tuned to its hardware give customers a built-in reason to stay on Nvidia despite competitors' efforts to pull them onto other hardware.

But there's a more urgent pressure behind the move: America is losing the open-source AI race, and losing it fast. Chinese open models went from barely 1.2% of global open-model usage in late 2024 to roughly 30% by the end of 2025, according to research by OpenRouter and Andreessen Horowitz. Alibaba's Qwen overtook Meta's Llama as the most-used self-hosted open-source model, according to Runpod. American companies including Airbnb adopted it for customer service. Startups worldwide are building on top of it. Beyond market share, that kind of adoption creates infrastructure dependencies that are hard to reverse. While U.S. giants like OpenAI, Anthropic, and Google keep their best models locked behind APIs, Chinese labs from DeepSeek to Alibaba have been flooding the open ecosystem.
Meta was the one major American player competing in open source with Llama, but Zuckerberg recently signaled the company might not make future models fully open. The gap between "best proprietary model" and "best open model" used to be massive—and in America's favor. That gap is now very small, and the open side of the ledger is increasingly Chinese.

Incredible graph. In just one year, China completely overtook the U.S. in free AI models. Not a single U.S. model in the top 5 today when last year the top 3 were all American. pic.twitter.com/34ErpBv8rg — Arnaud Bertrand (@RnaudBertrand) October 14, 2025

There's also a hardware threat underneath all of this. A new DeepSeek model is widely expected to drop soon, and it's rumored to have been trained entirely on chips made by Huawei—a sanctioned Chinese company. If that's confirmed, it would give developers around the world, particularly in China, a concrete reason to start testing Huawei's hardware. China's Zhipu AI is already doing that.

That's the scenario Nvidia most needs to prevent: Chinese open models and Chinese chips building an ecosystem that doesn't need Nvidia at all.