NVIDIA and Marvell alliance scales AI-RAN infrastructure


Network operators face an ongoing challenge: monetising heavy 5G capital expenditure while simultaneously preparing for the 6G era. The legacy model of selling basic connectivity yields diminishing returns, forcing operators to pivot towards intelligent platforms and exposed network APIs.

Against this commercial backdrop, NVIDIA and Marvell Technology have formed a partnership designed to link Marvell into the NVIDIA AI factory and the wider AI-RAN ecosystem using the NVIDIA NVLink Fusion platform. This collaboration gives customers who build on NVIDIA architectures greater choice and flexibility when developing next-generation hardware environments.

Backing this hardware integration is a substantial financial endorsement: NVIDIA has invested $2 billion in Marvell. Financial analysts monitoring the telecoms sector often look for concrete monetary commitments to validate hardware alliances, and this injection of capital gives Marvell the resources to scale its production of custom silicon and networking components.

Beyond the direct capital injection, the two technology giants plan to collaborate extensively on silicon photonics technology. Advancements in optical interconnects directly address the growing need for high-speed, low-latency networking architectures necessary to support heavy AI workloads at the edge.

Navigating heterogeneous compute environments

The foundation of this joint effort relies on the NVIDIA NVLink Fusion platform, a rack-scale architecture that lets developers create semi-custom AI setups using the existing NVLink ecosystem.

Within this arrangement, Marvell takes responsibility for supplying custom XPUs alongside scale-up networking gear that maintains strict compatibility with NVLink Fusion. NVIDIA, meanwhile, will deliver the underlying support technologies: the Vera CPU, ConnectX NICs, BlueField DPUs, Spectrum-X switches, the NVLink interconnect itself, and rack-scale AI compute systems.

This division of hardware responsibilities creates a heterogeneous AI infrastructure for engineers building custom XPUs, while ensuring full compatibility with NVIDIA systems. Operators can then integrate their edge deployments with NVIDIA’s GPU, CPU, networking, and storage platforms, tapping into the broader global supply chain that NVIDIA maintains. The inclusion of BlueField DPUs, for instance, allows operators to offload heavy security and networking tasks from the main processors, freeing up valuable compute cycles for revenue-generating AI applications.
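As a back-of-the-envelope illustration of that offload argument (the task names and percentage figures below are assumptions for the sketch, not measured BlueField numbers), moving networking and security work to a DPU can be modelled as simply removing those tasks from the host CPU budget:

```python
# Hypothetical sketch: estimating host-CPU cycles freed by offloading
# security/networking tasks to a DPU. Task names and cycle costs are
# illustrative assumptions, not measured NVIDIA BlueField figures.

TASKS = {
    "ipsec_encryption": 30,   # % of host CPU consumed without offload
    "packet_steering": 15,
    "telemetry": 5,
    "ai_inference": 40,       # the revenue-generating workload
}

OFFLOADABLE = {"ipsec_encryption", "packet_steering", "telemetry"}

def host_cpu_load(tasks, offloaded):
    """Sum host CPU usage for tasks not moved to the DPU."""
    return sum(cost for name, cost in tasks.items() if name not in offloaded)

before = host_cpu_load(TASKS, offloaded=set())
after = host_cpu_load(TASKS, offloaded=OFFLOADABLE)
print(f"Host CPU load: {before}% -> {after}% (freed {before - after}% for AI)")
```

Under these assumed figures, offloading frees more than half the host CPU, which is the headroom operators would then lease out for edge AI workloads.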

Matt Murphy, Chairman and CEO of Marvell, said: “Our expanded partnership with NVIDIA reflects the growing importance of high-speed connectivity, optical interconnect, and accelerated infrastructure in scaling AI.

“By connecting Marvell’s leadership in high-performance analog, optical DSP, silicon photonics, and custom silicon to NVIDIA’s expanding AI ecosystem through NVLink Fusion, we are enabling customers to build scalable, efficient AI infrastructure.”

Driving business impact and enterprise revenue

The ability to deploy specialised compute nodes within the Radio Access Network changes the economic model of cellular sites. By partnering to turn the telecommunication network into a distributed AI infrastructure using the NVIDIA Aerial AI-RAN framework for 5G and 6G, operators can host enterprise workloads directly at the cell tower.

This edge capability establishes new revenue streams entirely disconnected from consumer smartphone subscriptions. Enterprises require low-latency processing for automated manufacturing, autonomous logistics, and real-time video analytics. Network operators can lease this edge compute capacity to enterprises, thereby driving up Average Revenue Per User (ARPU) and reducing enterprise churn.

Deploying private 5G solutions provides another direct application for this newly announced infrastructure. The integration of Marvell’s custom silicon and NVIDIA’s rack-scale compute equips operators with the precise hardware combination necessary to secure highly lucrative private networking contracts. In such deployments, data never leaves the factory floor, satisfying strict data sovereignty and compliance requirements.

Jensen Huang, Founder and CEO of NVIDIA, commented: “The inference inflection has arrived. Token generation demand is surging, and the world is racing to build AI factories. Together with Marvell, we are enabling customers to leverage NVIDIA’s AI infrastructure ecosystem and scale to build specialised AI compute.”

Telecom operators cannot drop rack-scale AI compute into existing mobile switching centres without encountering friction. Integrating NVLink Fusion hardware requires extensive coordination with legacy Operations Support Systems (OSS) and Business Support Systems (BSS). Legacy BSS/OSS platforms were primarily designed to meter voice minutes and megabytes, not continuous API calls or dynamic edge compute provisioning. Overhauling these billing engines to handle AI-RAN monetisation represents a massive and multi-year undertaking.

Furthermore, spectrum management becomes increasingly complex under this model. Running multi-tenant AI workloads concurrently with high-priority 5G baseband processing demands precise resource isolation. IT directors often estimate that enforcing this isolation can consume around ten percent of an edge node’s compute capacity in overhead.
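The isolation requirement can be pictured as an admission-control problem: reserve capacity for the baseband slice plus the isolation overhead, and admit tenant AI workloads only into the remaining headroom. The capacities, percentages, and workload names below are illustrative assumptions, not figures from either vendor:

```python
# Illustrative admission control for a shared edge node: reserve capacity
# for 5G baseband plus a fixed isolation overhead, then greedily admit
# tenant AI workloads into the remaining headroom. All values are assumed.

NODE_CAPACITY = 100          # abstract compute units per edge node
BASEBAND_RESERVED = 50       # high-priority 5G baseband slice
ISOLATION_OVERHEAD = 10      # ~10% lost to enforcing strict isolation

def admit(workloads):
    """Admit AI workloads, in order, while they fit the headroom."""
    headroom = NODE_CAPACITY - BASEBAND_RESERVED - ISOLATION_OVERHEAD
    admitted = []
    for name, demand in workloads:
        if demand <= headroom:
            admitted.append(name)
            headroom -= demand
    return admitted

jobs = [("video-analytics", 25), ("llm-inference", 30), ("batch-training", 20)]
print(admit(jobs))  # → ['video-analytics']
```

With only 40 units of headroom, a single 25-unit job crowds out the rest; in practice the scheduler, not the hardware, decides how much enterprise revenue an edge site can actually carry.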

Operators must navigate multi-cloud environments, ensuring that containerised network functions interoperate smoothly with enterprise AI applications sharing the same physical silicon. While Marvell’s XPUs and NVIDIA’s Vera CPUs provide the processing variety needed for these distinct tasks, the software orchestration layer remains a daunting hurdle for IT directors to clear.

When operators expose these new edge capabilities through network APIs, they invite third-party developers to write applications integrated directly with the radio network. However, creating a developer-friendly API portal demands heavy investment in software infrastructure.

Legacy systems frequently lack the agility to authenticate, meter, and bill thousands of concurrent API requests originating from enterprise software. Upgrading these backend systems requires navigating a complex web of vendor lock-in and customised software deployments. IT directors face the difficult task of modernising the billing infrastructure without causing service interruptions for the existing subscriber base.
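One concrete slice of that modernisation is per-tenant metering. The sketch below (the class name, tenant identifier, and rate figures are illustrative assumptions) pairs a token-bucket throttle with a billing counter, so every authenticated API call is either counted for billing or rejected for back-off:

```python
# Sketch of per-tenant API metering: a token bucket throttles each
# enterprise tenant's call rate, while a counter records billable calls
# for the billing engine. Rates and tenant IDs are illustrative.
import time

class TenantMeter:
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec          # sustained calls per second
        self.capacity = burst             # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()
        self.billed_calls = 0             # usage record fed to billing

    def allow(self):
        """Return True (and bill the call) if the tenant has budget."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            self.billed_calls += 1
            return True
        return False                      # throttled: caller backs off

meters = {"factory-42": TenantMeter(rate_per_sec=100, burst=5)}
m = meters["factory-42"]
results = [m.allow() for _ in range(8)]   # burst of 8 calls, burst cap 5
print(results.count(True), "billed;", results.count(False), "throttled")
```

The point of the sketch is the coupling: throttling and billing share one code path, which is precisely what legacy metering stacks built around voice minutes and megabytes were never designed to do.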

Vendor context within the telecoms ecosystem

Integrating these advanced computing solutions naturally brings network operators into closer alignment with major hyperscalers and specialist hardware manufacturers.

The telecoms industry relies heavily on a small group of dominant radio vendors. Injecting NVIDIA and Marvell hardware into this ecosystem forces traditional equipment manufacturers to adapt their proprietary interfaces. The industry-wide push for Open RAN architecture has paved the way for this exact scenario, allowing third-party silicon to handle radio processing and enterprise compute simultaneously.

By embedding the NVIDIA Aerial framework into the physical network, operators essentially become highly distributed extensions of the broader AI ecosystem. This dynamic requires careful commercial negotiation.

Wholesale carriers and network operators must actively retain control over their subscriber data and network telemetry, rather than merely serving as an unmonetised conduit for third-party edge services. The hardware compatibility ensured by the NVLink ecosystem vastly simplifies the physical deployment and supply chain logistics, yet the commercial agreements governing these multi-vendor edge deployments remain intricate.

Operating an AI-RAN environment also necessitates a software-centric mindset, heavily resembling DevOps methodologies found within hyperscale cloud providers. The workforce must learn to manage computing capacity dynamically, treating the network as a programmable platform rather than a static array of cell towers. Only through this extensive retraining can operators fully monetise the sophisticated hardware investments enabled by this partnership.

See also: Why Nvidia and Nokia are backing AI-RAN specialist ODC


Telecoms is powered by TechForge Media.