Next-generation AI assistants are being developed in the Apple ecosystem and by chipmakers like Qualcomm, but early reports suggest they are being designed with limits in place.
Tom’s Guide has described early versions of these assistants as capable of navigating apps, carrying out bookings, and managing tasks across services. For instance, a private beta agentic system completed tasks such as booking services or posting content in apps. In one test, it moved through an app workflow and reached a payment screen before asking the user for confirmation.
AI agents are being built with approval checkpoints. Sensitive actions, especially those tied to payments or account changes, require user confirmation before they are completed. The “human-in-the-loop” model lets the system prepare an action, but leaves approval to the user. Research linked to Apple’s AI work has explored ways to ensure systems pause before taking actions users did not explicitly request.
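The approval checkpoint described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the names (`ActionRequest`, `SENSITIVE_ACTIONS`, `execute`) and the set of sensitive action kinds are assumptions for the example.

```python
# Hypothetical sketch of a human-in-the-loop approval checkpoint.
# All names and categories here are illustrative assumptions, not a
# real assistant API.
from dataclasses import dataclass

# Action kinds that must pause for explicit user confirmation
# (assumed categories, chosen to mirror the article's examples).
SENSITIVE_ACTIONS = {"payment", "account_change", "post_content"}

@dataclass
class ActionRequest:
    kind: str          # e.g. "payment", "navigation"
    description: str   # human-readable summary shown to the user

def requires_approval(action: ActionRequest) -> bool:
    """Sensitive actions wait for the user; routine ones do not."""
    return action.kind in SENSITIVE_ACTIONS

def execute(action: ActionRequest, user_confirmed: bool) -> str:
    """Prepare the action, but only complete it once approved."""
    if requires_approval(action) and not user_confirmed:
        return f"PAUSED: awaiting confirmation for '{action.description}'"
    return f"EXECUTED: {action.description}"
```

In this shape, the system can do all the preparatory work (filling forms, reaching the payment screen) and then block on the confirmation flag, which is exactly the pause-before-payment behaviour reported in testing.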
Banking apps already require confirmation for transfers. The same idea is now being applied to AI-driven actions in multiple services.
Limits and control
Another layer of control comes from restricting what the AI can access. Rather than giving the system full access to apps and data, businesses are setting limits, such as which apps the AI can interact with and when actions can be triggered.
In practice, this means the AI may be able to draft a purchase or prepare a booking, but not finalise it without approval. It also means the system cannot move freely in all services unless it has been granted permission.
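A permission layer of this kind can be modelled as a simple allowlist mapping each app to the verbs the agent may use within it. The structure below is a sketch under assumed names; the apps and verbs are invented to match the draft-but-not-finalise behaviour described above.

```python
# Illustrative permission layer: the agent may only use verbs
# explicitly granted per app. Apps and verbs are assumptions.
ALLOWED_ACTIONS = {
    "calendar": {"read", "draft"},
    "shopping": {"read", "draft"},  # can prepare a purchase, not finalise it
}

def is_permitted(app: str, verb: str) -> bool:
    """Deny by default: an unlisted app or verb is simply unreachable."""
    return verb in ALLOWED_ACTIONS.get(app, set())
```

The deny-by-default design matters here: an app the user never granted access to returns an empty permission set, so the agent cannot move freely into it even by accident.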
According to Tom’s Guide, on-device processing is designed with privacy in mind: if data remains on the device, sensitive information never needs to be sent to external servers.
In areas like payments, AI systems are expected to work with partners that already have strict rules in place. In one reported example, payment providers’ services are being integrated to provide secure authentication before transactions are completed, though such safeguards are still under development. The existing systems act as an additional layer of oversight. They can set transaction limits or require extra verification.
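The provider-side oversight mentioned above, transaction limits plus extra verification, might look like the following. The thresholds and return values are invented for illustration; real payment providers apply far more elaborate risk rules.

```python
# Hedged sketch of provider-side oversight for agent-initiated
# payments: a hard cap plus step-up verification above a threshold.
# All figures and names are assumptions, not any provider's rules.
HARD_LIMIT = 500.00          # no agent transaction above this amount
STEP_UP_THRESHOLD = 100.00   # above this, extra verification is required

def review_transaction(amount: float, user_verified: bool) -> str:
    """Return the provider's decision for an agent-initiated payment."""
    if amount > HARD_LIMIT:
        return "declined"
    if amount > STEP_UP_THRESHOLD and not user_verified:
        return "needs_verification"
    return "approved"
```

The point of the sketch is that this check lives with the payment partner, outside the assistant itself, so it holds even if the agent misbehaves upstream.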
Much of the discussion around AI governance has focused on enterprise use, in areas like cybersecurity and large-scale automation. The consumer side introduces a different challenge: companies must design controls that work for everyday users. That means clear approval steps and built-in privacy protections.
Autonomy with boundaries
As AI gains the ability to carry out actions, the risks grow, since errors can lead to financial loss or data exposure.
By placing controls at multiple points, including approval and infrastructure, companies are trying to manage those risks.
The approach may shape how agentic AI develops in the near term. Rather than aiming for full independence, companies appear focused on controlled environments where the risks can be managed.
(Photo by Junseong Lee)
See also: Agentic AI’s governance challenges under the EU AI Act in 2026



