Scaling enterprise AI requires overcoming architectural oversights that often stall pilots before production, a challenge that goes far beyond model selection. While generative AI prototypes are easy to spin up, turning them into reliable business assets involves solving the difficult problems of data engineering and governance.
Ahead of AI & Big Data Global 2026 in London, Franny Hsiao, EMEA Leader of AI Architects at Salesforce, discussed why so many initiatives hit a wall and how organisations can architect systems that actually survive the real world.
The ‘pristine island’ problem of scaling enterprise AI
Most failures stem from the environment in which the AI is built. Pilots frequently begin in controlled settings that create a false sense of security, only to crumble when faced with enterprise scale.
“The single most common architectural oversight that prevents AI pilots from scaling is the failure to architect a production-grade data infrastructure with built-in end-to-end governance from the start,” Hsiao explains.
“Understandably, pilots often start on ‘pristine islands’ – using small, curated datasets and simplified workflows. But this ignores the messy reality of enterprise data: the complex integration, normalisation, and transformation required to handle real-world volume and variability.”
When companies attempt to scale these island-based pilots without addressing the underlying data mess, the systems break. Hsiao warns that “the resulting data gaps and performance issues like inference latency render the AI systems unusable—and, more importantly, untrustworthy.”
Hsiao argues that the companies successfully bridging this gap are those that “bake end-to-end observability and guardrails into the entire lifecycle.” This approach provides “visibility and control into how effective the AI systems are and how users are adopting the new technology.”
Engineering for perceived responsiveness
As enterprises deploy large reasoning models – like the ‘Atlas Reasoning Engine’ – they face a trade-off between the depth of the model’s “thinking” and the user’s patience. Heavy compute creates latency.
Salesforce addresses this by focusing on “perceived responsiveness through Agentforce Streaming,” according to Hsiao.
“This allows us to deliver AI-generated responses progressively, even while the reasoning engine performs heavy computation in the background. It’s an incredibly effective approach for reducing perceived latency, which often stalls production AI.”
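The pattern is straightforward to picture in code: start rendering partial output the moment it is available rather than waiting for the full answer. The Python sketch below is purely illustrative of that idea, not Agentforce Streaming itself; the generate_stream function and its chunks are assumptions.

```python
import asyncio
from typing import AsyncIterator

async def generate_stream(prompt: str) -> AsyncIterator[str]:
    """Stand-in for a reasoning engine that yields partial text as it works."""
    for chunk in ["Checking the order history... ", "your refund was issued ", "on 12 March."]:
        await asyncio.sleep(0.5)  # simulates heavy computation per step
        yield chunk

async def respond(prompt: str) -> None:
    # Render each chunk as soon as it arrives instead of waiting for the full
    # answer, so the user sees progress while the engine keeps computing.
    async for chunk in generate_stream(prompt):
        print(chunk, end="", flush=True)
    print()

asyncio.run(respond("Where is my refund?"))
```

Total compute time is unchanged, but the first words appear almost immediately, which is what drives the perceived responsiveness Hsiao describes.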
Transparency also plays a functional role in managing user expectations when scaling enterprise AI. Hsiao elaborates on using design as a trust mechanism: “By surfacing progress indicators that show the reasoning steps or the tools being used, as well as images like spinners and progress bars to depict loading states, we don’t just keep users engaged; we improve perceived responsiveness and build trust.
“This visibility, combined with strategic model selection – like choosing smaller models for fewer computations, meaning faster response times – and explicit length constraints, ensures the system feels deliberate and responsive.”
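Those indicators can be thought of as a second stream of events alongside the answer itself. A minimal sketch, with the event names and emit callback assumed for illustration rather than taken from Salesforce’s implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProgressEvent:
    kind: str    # e.g. "reasoning_step" or "tool_call"
    detail: str  # short label the UI can show next to a spinner or progress bar

def run_turn(question: str, emit: Callable[[ProgressEvent], None]) -> str:
    # Emit indicators the interface can render while the actual work happens.
    emit(ProgressEvent("reasoning_step", "Interpreting the question"))
    emit(ProgressEvent("tool_call", "Searching the knowledge base"))
    emit(ProgressEvent("reasoning_step", "Drafting the answer"))
    return "Here is what I found..."

# Printing stands in for a UI subscribing to the event stream.
answer = run_turn("How do I reset the device?", emit=lambda e: print(f"[{e.kind}] {e.detail}"))
print(answer)
```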
Offline intelligence at the edge
For industries with field operations, such as utilities or logistics, reliance on continuous cloud connectivity is a non-starter. “For many of our enterprise customers, the biggest practical driver is offline functionality,” states Hsiao.
Hsiao highlights the shift toward on-device intelligence, particularly in field services, where the workflow must continue regardless of signal strength.
“A technician can photograph a faulty part, error code, or serial number while offline. An on-device LLM can then identify the asset or error and provide guided troubleshooting steps from a cached knowledge base instantly,” explains Hsiao.
Data synchronisation happens automatically once connectivity returns. “Once a connection is restored, the system handles the ‘heavy lifting’ of syncing that data back to the cloud to maintain a single source of truth. This ensures that work gets done, even in the most disconnected environments.”
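Stripped to its essentials, the workflow Hsiao describes is a cached knowledge base plus a local queue: answer from the device immediately, record the work, and push it to the cloud once a connection exists. The sketch below is a hedged illustration of that pattern; the cache contents, queue, and upload hook are invented, not the actual Salesforce field service implementation.

```python
import json
import queue

# Cached knowledge base shipped to the device (contents are illustrative).
CACHED_KB = {"E-113": "Replace the pressure valve and restart the pump."}

pending_sync: "queue.Queue[dict]" = queue.Queue()

def troubleshoot_offline(error_code: str, photo_path: str) -> str:
    """Identify the fault locally and queue the record for later upload."""
    steps = CACHED_KB.get(error_code, "No cached guidance; escalate when online.")
    pending_sync.put({"error_code": error_code, "photo": photo_path, "steps": steps})
    return steps

def sync_when_online(upload) -> None:
    # Once connectivity is restored, push queued records back to the cloud
    # so it remains the single source of truth.
    while not pending_sync.empty():
        upload(pending_sync.get())

print(troubleshoot_offline("E-113", "pump_photo.jpg"))
sync_when_online(upload=lambda record: print("synced:", json.dumps(record)))
```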
Hsiao expects continued innovation in edge AI due to benefits like “ultra-low latency, enhanced privacy and data security, energy efficiency, and cost savings.”
High-stakes gateways
Autonomous agents are not set-and-forget tools. When scaling enterprise AI deployments, governance requires defining exactly when a human must verify an action. Hsiao describes this not as dependency, but as “architecting for accountability and continuous learning.”
Salesforce mandates a “human-in-the-loop” for specific areas Hsiao calls “high-stakes gateways”:
“This includes specific action categories, including any ‘CUD’ (Creating, Updating, or Deleting) actions, as well as verified contact and customer contact actions,” says Hsiao. “We also default to human confirmation for critical decision-making or any action that could be potentially exploited through prompt manipulation.”
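In practice, a high-stakes gateway amounts to a policy check that runs before the agent acts: anything in a sensitive category is routed to a person first, while everything else proceeds autonomously. A minimal sketch of that idea, with the category names and approval hook assumed for illustration rather than drawn from Salesforce’s policy engine:

```python
from typing import Callable

HIGH_STAKES = {"create", "update", "delete", "customer_contact"}

def execute_action(action: str, category: str, approve: Callable[[str], bool]) -> str:
    """Route high-stakes actions through human confirmation before running them."""
    if category in HIGH_STAKES:
        if not approve(action):  # the human-in-the-loop gateway
            return f"'{action}' blocked pending human review"
    return f"'{action}' executed"

# Low-stakes actions run autonomously; high-stakes ones wait for a person.
print(execute_action("summarise case notes", "read", approve=lambda a: True))
print(execute_action("delete account record", "delete", approve=lambda a: False))
```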
This structure creates a feedback loop where “agents learn from human expertise,” creating a system of “collaborative intelligence” rather than unchecked automation.
Trusting an agent requires seeing its work. Salesforce has built a “Session Tracing Data Model (STDM)” to provide this visibility. It captures “turn-by-turn logs” that offer granular insight into the agent’s logic.
“This gives us granular step-by-step visibility that captures every interaction including user questions, planner steps, tool calls, inputs/outputs, retrieved chunks, responses, timing, and errors,” says Hsiao.
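The shape of such a turn-by-turn record is easy to picture as a structured log entry per interaction. The field names below are assumptions for illustration, not the actual STDM schema:

```python
from dataclasses import asdict, dataclass, field
from typing import Any, Optional

@dataclass
class TurnTrace:
    """One turn of an agent session, logged for later analysis."""
    session_id: str
    user_question: str
    planner_steps: list = field(default_factory=list)
    tool_calls: list = field(default_factory=list)
    retrieved_chunks: list = field(default_factory=list)
    response: str = ""
    latency_ms: int = 0
    error: Optional[str] = None

trace = TurnTrace(
    session_id="abc-123",
    user_question="What is the status of order 4417?",
    planner_steps=["look up order", "summarise status"],
    tool_calls=[{"tool": "order_lookup", "input": {"id": "4417"}, "output": "shipped"}],
    response="Order 4417 shipped on Tuesday.",
    latency_ms=820,
)
print(asdict(trace))
```

Records like this are the raw material for the analytics, optimisation, and health monitoring described below.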
This data allows organisations to run ‘Agent Analytics’ for adoption metrics, ‘Agent Optimisation’ to drill down into performance, and ‘Health Monitoring’ for uptime and latency tracking.
“Agentforce observability is the single mission control for all your Agentforce agents for unified visibility, monitoring, and optimisation,” Hsiao summarises.
Standardising agent communication
As businesses deploy agents from different vendors, these systems need a shared protocol to collaborate. “For multi-agent orchestration to work, agents can’t exist in a vacuum; they need a common language,” argues Hsiao.
Hsiao outlines two layers of standardisation: orchestration and meaning. For orchestration, Salesforce is adopting open-source standards like MCP (Model Context Protocol) and A2A (Agent2Agent Protocol).
“We believe open source standards are non-negotiable; they prevent vendor lock-in, enable interoperability, and accelerate innovation.”
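MCP, for instance, standardises tool invocation as JSON-RPC messages, so any compliant agent or server can interpret the request regardless of vendor. The snippet below shows the general shape of such a call; the tool name and arguments are invented for illustration:

```python
import json

# The general shape of an MCP-style tool invocation: a JSON-RPC request naming
# the tool and its arguments in a vendor-neutral form.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_customer",               # illustrative tool name
        "arguments": {"customer_id": "C-9021"},  # illustrative arguments
    },
}
print(json.dumps(request, indent=2))
```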
However, communication is useless if the agents interpret data differently. To solve for fragmented data, Salesforce co-founded OSI (Open Semantic Interchange) to unify semantics so an agent in one system “truly understands the intent of an agent in another.”
The future enterprise AI scaling bottleneck: agent-ready data
Looking forward, the challenge will shift from model capability to data accessibility. Many organisations still struggle with legacy, fragmented infrastructure where “searchability and reusability” remain difficult.
Hsiao predicts the next major hurdle – and solution – will be making enterprise data “‘agent-ready’ through searchable, context-aware architectures that replace traditional, rigid ETL pipelines.” This shift is necessary to enable “hyper-personalised and transformed user experience because agents can always access the right context.”
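One way to read “agent-ready” is that context is fetched by search at request time rather than pre-shaped by a fixed pipeline. The toy sketch below illustrates that idea, with simple keyword overlap standing in for a real semantic index:

```python
# Toy "agent-ready" store: records are queried for context at request time
# rather than pre-shaped by a rigid ETL job. Keyword overlap stands in for
# a real semantic search layer; the records themselves are invented.
RECORDS = [
    "Customer C-9021 prefers email contact and is on the premium plan.",
    "Order 4417 shipped on Tuesday via standard delivery.",
    "Warranty policy: parts are covered for 24 months from purchase.",
]

def retrieve_context(query: str, k: int = 2) -> list:
    """Return the k records sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(RECORDS, key=lambda r: len(q & set(r.lower().split())), reverse=True)
    return scored[:k]

print(retrieve_context("When did order 4417 ship?"))
```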
“Ultimately, the next year isn’t about the race for bigger, newer models; it’s about building the orchestration and data infrastructure that allows production-grade agentic systems to thrive,” Hsiao concludes.
Salesforce is a key sponsor of this year’s AI & Big Data Global in London and will have a range of speakers, including Franny Hsiao, sharing their insights during the event. Be sure to swing by Salesforce’s booth at stand #163 for more from the company’s experts.
See also: Databricks: Enterprise AI adoption shifts to agentic systems