NVIDIA Corporation
NVDA · United States
Owns CUDA, the software platform that AI and scientific code is written against, making its GPUs the only hardware that code runs on.
NVIDIA owns CUDA, a parallel-computing platform and programming model that AI training frameworks like PyTorch and inference engines like TensorRT are written against, which means the code a team builds on NVIDIA hardware only runs on NVIDIA hardware. Every new CUDA kernel a research team writes or production engineer ships deepens that dependency, because switching to AMD ROCm or Intel oneAPI requires rewriting and re-validating each of those kernels, a process production teams measure in engineering quarters, not days. That accumulated rewrite cost is what forces the next hardware purchase to also be NVIDIA, so demand for the chips is effectively locked in by the software that runs on them. The one thing NVIDIA cannot control is how many chips it can actually ship: TSMC, the only foundry NVIDIA uses for the 4nm and 5nm process nodes its data-center GPUs require, shares that capacity with Apple and other large customers, so the ceiling on how many H100s reach customers in any quarter is set by TSMC's production schedule, not by what NVIDIA or its customers are willing to spend.
How does this company make money?
NVIDIA earns money each time a chip is sold. Data-center H100 GPUs sell for between $25,000 and $40,000 per unit to distributors and hardware makers. GeForce gaming cards sell for between $300 and $1,600 per unit depending on the model. On top of hardware sales, NVIDIA also sells software: enterprise licenses such as NVIDIA AI Enterprise bundle supported CUDA libraries and AI framework optimizations.
What makes this company hard to replace?
Production code written against CUDA takes months to rewrite for AMD ROCm or Intel oneAPI, and that rewrite has to happen before the new hardware is even useful. TensorRT, NVIDIA's inference engine, gets embedded deeply into live AI systems, so the cost of replacing it is something teams measure in engineering quarters. Game developers face a separate lock: titles that build on NVIDIA-specific features such as DLSS upscaling and RTX-branded ray-tracing pipelines tie those features to RTX hardware, which keeps future releases targeting it.
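For concreteness, the dependency accrues one kernel at a time. Below is a minimal, hypothetical sketch of the kind of code production teams accumulate (it uses only the standard CUDA runtime API; the kernel itself is an illustration, not code from any real team):

```cuda
// Minimal CUDA kernel: scales a device vector in place, one thread per element.
#include <cuda_runtime.h>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1024;
    float *d_x;
    cudaMalloc(&d_x, n * sizeof(float));            // NVIDIA-specific runtime call
    scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);  // NVIDIA-specific launch syntax
    cudaDeviceSynchronize();
    cudaFree(d_x);
    // Porting even this toy to AMD ROCm means translating it to HIP
    // (hipMalloc, hipLaunchKernelGGL, ...) and re-validating numerics and
    // performance on new hardware; multiply that by thousands of production
    // kernels and the quarters-long switching estimate above follows.
    return 0;
}
```

Simple kernels like this translate almost mechanically, but kernels tuned around NVIDIA-specific features (warp intrinsics, Tensor Core instructions) are where most of the rewrite and re-validation effort concentrates.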
What limits this company?
TSMC controls how many H100 and A100 chips NVIDIA can ship in any given quarter. That fab capacity is shared with Apple and other large customers, and NVIDIA owns no fabrication plant of its own, so no amount of spending by NVIDIA can directly add more chips to the supply.
What does this company depend on?
NVIDIA cannot operate without TSMC's foundry capacity at the 4nm and 5nm process nodes that produce 100% of its advanced GPU chips. It also depends on SK Hynix and Samsung for the HBM3 high-bandwidth memory stacked alongside those chips, and on advanced packaging capacity, including TSMC's CoWoS process and outsourced assembly partners such as Advanced Semiconductor Engineering (ASE).
Who depends on this company?
Microsoft Azure and AWS run GPT and Claude model training on NVIDIA hardware; if NVIDIA stopped delivering, those teams would face months-long delays scaling their AI services. Autodesk and Adobe users would lose the real-time 3D rendering and video-editing acceleration those creative tools get from NVIDIA GPUs. Tesla and Waymo would slow their autonomous-vehicle work because the simulation compute they use to train driving neural networks runs on NVIDIA GPUs.
How does this company scale?
The CUDA software platform costs nothing extra to copy across millions of GPU installations once it has been built; every new developer who writes code against CUDA deepens the lock without NVIDIA spending another dollar. What does not scale freely is the physical chip supply: TSMC's advanced-node wafer capacity sets a hard ceiling that cannot be lifted quickly, no matter how much money NVIDIA or its customers are willing to spend.
What external forces can significantly affect this company?
U.S. Commerce Department export controls block sales of A100 and H100 chips to China, cutting off more than 20% of potential data-center GPU revenue. Taiwan geopolitical tensions are a permanent background risk because TSMC, which makes 100% of NVIDIA's advanced chips, is located there. The European Union's AI Act may also impose compliance requirements that restrict how certain GPU compute capabilities can be used for AI model training in Europe.
Where is this company structurally vulnerable?
If a major cloud provider or AI lab, say one of the teams training large models on AWS or Google infrastructure, rewrote its entire training stack on a non-CUDA architecture and then published those rewritten libraries publicly, it would absorb the painful switching cost once and hand the result to every other lab for free. That would remove the main reason most teams stay on NVIDIA hardware, and defections would accelerate from there.