What MWC 2026 Actually Proved


AI-native networks have been a recurring talking point at Mobile World Congress for years. What made MWC 2026 in Barcelona different was the evidence. A cascade of announcements from the world’s biggest telecom vendors, chipmakers, and operators didn’t just reiterate the vision for AI-RAN–they delivered field trial results, commercial product launches, open-source toolkits, and a multi-operator coalition committing to build 6G on AI-native foundations. 

For enterprise and IT decision-makers, the signal is clear: the architectural shift happening in telecom infrastructure will soon reshape how connectivity is delivered, managed, and monetised.

Nvidia and a global coalition lock in on AI-RAN and 6G

The week’s most consequential announcement came from Nvidia, which secured commitments from more than a dozen global operators and technology companies–including BT Group, Deutsche Telekom, Ericsson, Nokia, SK Telecom, SoftBank, T-Mobile, Cisco, and Booz Allen–to build 6G on open, secure, and AI-native software-defined platforms.

The initiative, framed as a shared commitment to ensure future connectivity infrastructure is intelligent, resilient and trustworthy, is backed by ongoing collaborations with governments across the US, UK, Europe, Japan, and Korea.

Jensen Huang, Nvidia’s founder and CEO, set the stakes plainly: “AI is redefining computing and driving the largest infrastructure buildout in human history–and telecommunications is next.” The company is a founding member of the AI-RAN Alliance, which now has over 130 participating companies, and has joined the FutureG Office-led OCUDU Initiative in the US to accelerate open, software-defined, AI-native 6G architectures.

Nvidia also released a suite of open-source tools targeting network operators: a 30-billion-parameter Nemotron Large Telco Model (LTM), developed with AdaptKey AI and fine-tuned on telecom datasets including industry standards and synthetic logs; an open-source guide co-published with Tech Mahindra for building AI agents that reason like NOC engineers; and new Nvidia Blueprints for RAN energy efficiency and network configuration. 

The energy blueprint integrates VIAVI’s TeraVM AI RAN Scenario Generator to simulate energy-saving policies in a closed loop before touching live networks. Real-world adoption of the network configuration blueprint is already underway–Cassava Technologies is deploying it for an autonomous network platform across Africa’s multi-vendor mobile environment, while NTT DATA is using it with a tier one operator in Japan to manage traffic surges after network outages.
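The core idea of the energy blueprint–test a policy in a closed simulation loop before it touches a live network–can be illustrated with a toy sketch. Everything below is invented for illustration (the traffic trace, the 15% sleep threshold, the 10% basic-layer headroom); it is not the VIAVI or Nvidia implementation, just the shape of the check: replay a traffic trace against a candidate energy-saving policy and only mark it deployable if no traffic would have been dropped.

```python
# Toy closed-loop check of a RAN energy-saving policy against simulated
# traffic. All names, thresholds, and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class CellSample:
    hour: int
    load: float  # fraction of cell capacity in use, 0.0-1.0

def sleep_policy(sample: CellSample, threshold: float = 0.15) -> bool:
    """Candidate policy: sleep the capacity layer when load is below threshold."""
    return sample.load < threshold

def simulate(trace, policy):
    """Replay a trace; reject the policy if it would have dropped traffic."""
    saved_hours, dropped = 0, 0.0
    for s in trace:
        if policy(s):
            saved_hours += 1
            # Assume a basic layer absorbs up to 10% load while capacity sleeps.
            dropped += max(0.0, s.load - 0.10)
    return {"saved_hours": saved_hours, "dropped": dropped,
            "safe_to_deploy": dropped == 0.0}

# Overnight-to-morning load trace for one simulated cell.
trace = [CellSample(h, load) for h, load in enumerate(
    [0.05, 0.04, 0.03, 0.06, 0.12, 0.30, 0.55, 0.70])]
report = simulate(trace, sleep_policy)
print(report)  # the 0.12 hour exceeds basic-layer headroom, so not safe
```

The point of the loop is the last field: a policy that saves energy but would have dropped traffic never reaches the live network; the operator tightens the threshold and re-simulates instead.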

Nokia and operators take AI-RAN over the air

Nokia announced significant progress in its strategic AI-RAN partnership with Nvidia, completing functional tests of its anyRAN software on Nvidia’s GPU-accelerated AI-RAN platform with T-Mobile US, Indosat Ooredoo Hutchison (IOH), and SoftBank Corp. The results matter because they moved validation out of controlled lab environments and into live, over-the-air conditions.

At T-Mobile’s AI-RAN Innovation Centre in Seattle, Nokia’s AirScale Massive MIMO radio in the 3.7GHz band ran concurrent AI and RAN workloads–including video streaming, generative AI queries, and AI-powered video captioning–on a single Nvidia GH200 Grace Hopper server alongside commercial 5G.

IOH achieved Southeast Asia’s first AI-RAN-powered Layer 3 5G call at MWC, with AI and RAN workloads running simultaneously on shared GPU infrastructure. As IOH President Director and CEO Vikram Sinha put it: “This is not just about proving that the technology works. It is about ensuring that every Indonesian, wherever they are, can benefit from the digital and AI era.”

SoftBank’s demonstration went further, showing how spare compute capacity identified by its AITRAS Orchestrator can run third-party AI workloads–a glimpse of how operators could eventually monetise RAN infrastructure beyond connectivity. 
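The monetisation logic SoftBank is gesturing at can be reduced to a simple capacity calculation: RAN traffic always takes priority, a safety buffer is reserved on top of forecast demand, and whatever remains is offered to AI tenants. The sketch below is a hypothetical illustration of that idea, not the AITRAS Orchestrator; the 20% headroom figure and all names are invented.

```python
# Hypothetical spare-capacity calculation for a shared RAN/AI GPU pool.
# RAN demand is served first, plus a safety buffer for traffic spikes;
# only the remainder is leased to third-party AI workloads.
def spare_capacity(total_gpus: int, ran_demand: float, headroom: float = 0.2) -> int:
    """GPUs that can be offered to AI tenants this scheduling interval."""
    reserved = ran_demand * (1 + headroom)  # forecast RAN need plus buffer
    return max(0, int(total_gpus - reserved))

# Overnight, the RAN needs roughly 2 of 8 GPUs, so up to 5 can serve AI.
print(spare_capacity(total_gpus=8, ran_demand=2.0))  # 5
```

In practice an orchestrator would recompute this every scheduling interval and preempt AI jobs when RAN demand spikes, which is why the headroom buffer matters.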

Nokia’s expanded AI-RAN ecosystem now includes Dell Technologies, Quanta, Supermicro, and Red Hat OpenShift for orchestration, giving operators a widening range of commercial off-the-shelf options. Nokia shares rose 5.4% on the day of the announcement.

Ericsson takes a different road to AI-native networks

Ericsson arrived at MWC 2026 with a distinctly different approach, and one worth understanding. While Nokia has bet on Nvidia GPU acceleration (backed by a US$1 billion Nvidia investment), Ericsson unveiled ten new AI-ready radios built on its own purpose-built silicon, with neural network accelerators embedded directly into its Massive MIMO hardware. No Nvidia GPUs required.

The portfolio includes AI-managed beamforming, AI-powered outdoor positioning, instant coverage prediction using AI models, and a latency-prioritised scheduler delivering up to seven times faster response times. Ericsson’s argument is built on total cost of ownership: custom silicon, it contends, delivers better TCO and power efficiency than external GPU hardware, with the added benefit of supply chain independence. 
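A latency-prioritised scheduler, in essence, serves the most deadline-sensitive traffic first in each transmission interval rather than round-robining across flows. The toy sketch below illustrates that principle only; it is not Ericsson’s scheduler, and the flows, deadlines, and two-slot interval are invented for the example.

```python
# Toy latency-prioritised scheduler: each transmission interval serves up to
# a fixed number of packets, tightest deadline first. Purely illustrative.
import heapq

def schedule(queue, slots_per_tti=2):
    """Drain (deadline_ms, flow) packets, lowest deadline first."""
    heapq.heapify(queue)
    order = []
    while queue:
        # Serve up to slots_per_tti packets this transmission interval.
        tti = [heapq.heappop(queue) for _ in range(min(slots_per_tti, len(queue)))]
        order.extend(flow for _, flow in tti)
    return order

pkts = [(50, "video"), (1, "xr"), (20, "voice"), (2, "robot")]
print(schedule(pkts))  # ['xr', 'robot', 'voice', 'video']
```

The XR and robotics flows with 1-2ms deadlines jump ahead of voice and video, which is the behaviour a latency-prioritised scheduler buys over a fairness-first one.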

Per Narvinger, head of Ericsson’s mobile networks business, has been direct that Ericsson’s commitment to its own silicon is unlikely to change. At MWC, Ericsson also announced a sweeping collaboration with Intel spanning compute, cloud technologies, and AI-driven RAN and packet core use cases, aimed at accelerating ecosystem readiness for AI-native 6G. “6G is not merely an iteration of mobile technology. It is the infrastructure that will distribute AI across devices, the edge and the cloud,” said Ericsson President and CEO Börje Ekholm.

Intel CEO Lip-Bu Tan framed the partnership as a path to open, power-efficient networks grounded in AI inference, with future Ericsson Silicon built on Intel’s most advanced process nodes.

SK Telecom, SoftBank, and the operator rebuild

Beyond the vendor announcements, two operators used MWC 2026 to articulate how deeply AI-RAN fits into their broader infrastructure strategies. 

SK Telecom CEO Jung Jai-hun outlined a full-stack AI-native rebuild–from its network core to customer service systems–including plans to upgrade its sovereign AI foundation model from 519 billion to over one trillion parameters, and to build a new AI data centre in Korea in collaboration with OpenAI. 

The company is also expanding autonomous network operations using AI to automate wireless quality management, traffic control, and network equipment operations, with AI-RAN technology central to improving speed and reducing latency.

SoftBank, meanwhile, demonstrated its Autonomous Agentic AI-RAN (AgentRAN) system at MWC in collaboration with Northeastern University’s INSI, Keysight Technologies, and zTouch Networks. 

The system uses SoftBank’s Large Telecom Model to translate natural-language operator goals into real-time 5G and 6G network configurations–a meaningful step toward networks that manage themselves based on intent rather than manual instruction.
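Intent-based operation means the operator states a goal and the system derives the configuration. In AgentRAN that translation is done by SoftBank’s Large Telecom Model; the sketch below substitutes a trivial rule-based stand-in just to make the input/output shape concrete. The intents, the regex, and the config keys are all invented for illustration.

```python
# Rule-based stand-in for an LLM intent-to-config step: map a natural-language
# operator goal to a (hypothetical) RAN configuration fragment.
import re

def intent_to_config(intent: str) -> dict:
    intent = intent.lower()
    config = {}
    if "latency" in intent:
        config["scheduler"] = "latency_priority"
    if "energy" in intent or "power" in intent:
        config["cell_sleep"] = True
    m = re.search(r"(\d+)\s*ms", intent)  # pull a numeric latency target, if any
    if m:
        config["target_latency_ms"] = int(m.group(1))
    return config

print(intent_to_config("Keep latency under 10 ms for the stadium slice"))
# {'scheduler': 'latency_priority', 'target_latency_ms': 10}
```

A production system replaces the rules with a model, but the contract is the same: free-form goal in, validated configuration out, applied to the network in real time.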

A hardware ecosystem takes shape around AI-RAN

One of the clearest signs that AI-RAN is maturing from concept to commercial infrastructure is the breadth of hardware companies now building purpose-built products for it. At MWC 2026, Quanta Cloud Technology announced commercial off-the-shelf AI-RAN products supporting Nvidia ARC platforms and Nokia software.

Supermicro extended support across the full Nvidia AI-RAN portfolio, including ARC-Pro and RTX 6000-based configurations. MSI unveiled its unified AI-vRAN platform with dynamic GPU allocation between 5G and AI workloads. 

Lanner Electronics launched its AstraEdge AI Server lineup–the ECA-6710 and ECA-5555–purpose-built to co-locate AI inference, RAN functions, and high-performance packet processing at cell sites. AMD, not to be left out, positioned its EPYC 8005 edge platform and Open Telco AI initiative at MWC as an alternative compute path for operators moving from AI pilots to production.

What this means beyond the network

For enterprise decision-makers, the implications of this week’s announcements extend beyond telecom infrastructure procurement. AI-RAN networks that evolve continuously through software–rather than requiring costly hardware refresh cycles–mean connectivity infrastructure increasingly resembles cloud infrastructure in its pace of change and flexibility. 

The embedding of GPU compute within the RAN opens the prospect of enterprise AI workloads running at the network edge, closer to where data is generated. And as Nvidia’s State of AI in Telecom report noted, 77% of respondents anticipate a significantly faster deployment timeline for AI-native wireless architecture than for previous network generations.

The architecture debate between Ericsson’s custom silicon path and Nokia-Nvidia’s GPU-accelerated approach is also worth watching–not because one will definitely win, but because it reflects a genuine question about where AI inference should sit in network hardware, and at what cost. That question will shape operator procurement decisions and vendor relationships for years.

What MWC 2026 made unmistakable is that AI-native networks are no longer a research agenda. The field trials are live, the hardware is shipping, and the coalitions are forming. The question for enterprises and operators alike is no longer whether this transition will happen–but how fast, and who leads it.


See also: MWC 2026: SK Telecom lays out plan to rebuild its core around AI

