
Agentic UX Over "Chat": How to Design Multi-Agent Systems People Actually Trust

2025/11/28 03:38
12 min read
For feedback or questions about this content, contact us at crypto.news@mexc.com.

When I was first tasked with integrating generative AI into Viamo’s IVR platform, which serves millions of people across emerging markets in Africa and Asia, I quickly recognised that we couldn’t just stick a chat interface on it and call it a day, as much as that would have simplified some of our technical and development challenges. We were designing for people who rely on voice interfaces as their primary access to information about healthcare, agriculture, and finance, and who have little patience for AI that fails them or misinforms them when their time and bandwidth are limited.

That project taught me a lesson about designing for AI that I think every designer should learn: designing for agentic AI is not about making it chat-friendly, but about building intelligent systems that work reliably, transparently, and predictably inside workflows people already trust. Over seven years of designing products across fintech, logistics, and software platforms, I have realised that the most effective way to implement AI is not to replace human judgement, but to augment it in ways that people can, and ultimately will, trust.

The Fatal Flaw of Chat-First Thinking

The industry’s obsession with chat interfaces has ingrained a dangerous paradigm in AI products: everyone is trying to build a “ChatGPT for Y”. Almost no one stops to ask whether chat interaction is actually what the task needs, rather than simply something we know how to build.

Sometimes it is. Chat is perfect for open-ended exploration and creative tasks where the journey matters as much as the destination. But most business tasks demand accuracy, auditability, and repeatability. When I designed the supplier interface for Waypoint Commodities, a system handling million-dollar fertiliser and chemical trade transactions, users didn’t need a friendly chat window for exploratory conversations about their deals. They needed interfaces that let AI systems point out errors, identify optimal routes, and flag compliance concerns without clouding critical transactions in uncertainty or vagueness.

The primary issue with chat-centric AI is that it hides decision-making behind a facade of conversation. Users can’t easily inspect what information was used, what rules were applied, and what alternatives were explored. That is acceptable for low-stakes queries, but disastrous for consequential choices. When we designed Waypoint’s shipment-monitoring system, which tracked orders through fulfilment, users needed assurance that AI messages about potential delays or market fluctuations were grounded in facts the system had actually retrieved and verified, not in fabricated observations.

Multi-Agent Systems Require Multi-Modal Interfaces

The paradigm shift in my thinking came when I stopped designing for a single AI model and started designing for environments where multiple specialised AI agents operate together as a system.

That meant abandoning the one-window chat paradigm entirely. Instead, we built a multi-modal interface in which several interaction methods could operate side by side. Quick facts got immediate responses through AI voice output. Troubleshooting ran as a guided interaction in which the AI asked preliminary questions before handing off to an expert system. Users searching for information on government facilities got formatted replies with cited sources. Each interaction method carried distinct visual and audio signals that set user expectations accordingly.
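The routing behind such an interface can be sketched roughly as follows. This is a hypothetical Python illustration, not Viamo’s code: the keyword heuristics, mode names, and cue filenames are invented stand-ins for a real intent classifier and design system, but the shape shows how each query type maps to a distinct interaction mode with its own expectation-setting signals.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    VOICE_ANSWER = "voice_answer"  # quick facts: immediate spoken reply
    GUIDED_FLOW = "guided_flow"    # troubleshooting: scripted questions, then expert system
    CITED_CARD = "cited_card"      # facility lookups: formatted reply with sources

@dataclass
class RoutedQuery:
    mode: Mode
    audio_cue: str  # distinct earcon per mode, so users know what to expect
    icon: str       # distinct visual signal per mode

# Hypothetical keyword heuristics standing in for a real intent classifier.
KEYWORDS = {
    Mode.GUIDED_FLOW: ("not working", "error", "problem", "broken"),
    Mode.CITED_CARD: ("clinic", "office", "facility", "where can i"),
}

# Hypothetical cue assets; a real design system would define these.
CUES = {
    Mode.VOICE_ANSWER: ("chime_short.wav", "bolt"),
    Mode.GUIDED_FLOW: ("chime_steps.wav", "wrench"),
    Mode.CITED_CARD: ("chime_cite.wav", "book"),
}

def route(query: str) -> RoutedQuery:
    """Pick an interaction mode plus the signals that set user expectations."""
    q = query.lower()
    mode = Mode.VOICE_ANSWER  # default: treat as a quick factual question
    for candidate, words in KEYWORDS.items():
        if any(w in q for w in words):
            mode = candidate
            break
    audio, icon = CUES[mode]
    return RoutedQuery(mode=mode, audio_cue=audio, icon=icon)
```

The important design property is not the classifier, which could be anything, but that every mode arrives with its own consistent audio and visual signature.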

The outcomes validated this strategy: response accuracy improved by more than thirty per cent, and engagement rose. More significantly, abandonment fell by twenty per cent as users stopped leaving conversations out of frustration over mismatched expectations. Once users understood they were speaking to an AI system with a defined body of knowledge, rather than waiting for human expertise, they adjusted their questions and their patience accordingly.

Designing for Verification, Not Just Automation

One of the most important principles of agentic UX design that I uphold is that automation without verification is merely technical debt masquerading as AI. Every AI agent in a system should ship with an escape hatch that lets users validate its reasoning and override its decisions when required, not because one lacks faith in the AI’s abilities, but out of respect for the fact that users carry final responsibility in regulated environments and high-value transactions.

When I designed the admin dashboard for onboarding new users at Waypoint, we had a classic automation project: AI would process incorporation documents, extract the essential information, and automatically populate user profiles, cutting onboarding from several hours to minutes. But we also understood that inaccuracies could push a company into non-compliance or, worse, create fraudulent user profiles. The remedy, we realised, was not more accurate AI processing but a verification system in which AI-generated profiles stayed pending until a human admin activated them.

In our interface, we indicated the AI’s confidence level for each extracted field:

  • High-confidence fields were shown in black text with a green tick mark;
  • Medium-confidence fields in orange with a neutral symbol;
  • Low-confidence or missing fields in red with a warning symbol.

This context meant thirty seconds per profile was enough for admins to catch any errors the AI had missed.

The outcome was clear: onboarding time fell forty per cent compared with fully manual methods, with greater accuracy than either human or AI approaches alone. More significantly, admin staff trusted the system because they could actually follow its logic. Any AI error was easy to spot on the verification page, and that trust is what let us successfully roll out further AI functionality later on.

Progressive Disclosure of Agent Capabilities

Another subtle but essential area of agentic UX, and one most designers struggle with, is telling users what their agents can and cannot accomplish without overwhelming them with possibilities. This is especially true for generative AI systems, as we found at FlexiSAF Edusoft, where I built systems whose capabilities ranged widely but unpredictably across tasks. Our users, students and parents, needed guidance through often complex admission procedures, and they needed to know which questions the AI could answer and which required human interaction.

Our implementation surfaced capability hints in context. As a user typed a question about application deadlines, they would see examples of questions the AI answered well, such as “When is the deadline for engineering applications?”, alongside questions better handled by the institution’s staff, such as “Can I be exempted from payment of application fees?”

Additionally, we built a feedback cycle through which users could indicate whether an AI response had fully answered their question. This improved the model, but it also gave users a way to signal that they needed escalation rather than feeling stranded by an AI system. The system would surface relevant resources or, failing that, connect the user with human staff. Support tickets fell without any loss of satisfaction, because people felt they had been listened to.
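The escalation logic in that feedback cycle can be sketched as a small state machine. This is a hypothetical Python outline, not FlexiSAF’s implementation; the topic keys, resource links, and class names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    question: str
    status: str = "open"

@dataclass
class EscalationLoop:
    """Sketch of the answered / not-answered feedback cycle."""
    resources: dict = field(default_factory=dict)  # topic -> self-serve link
    tickets: list = field(default_factory=list)    # handed to human staff

    def on_feedback(self, question: str, topic: str, answered: bool) -> str:
        if answered:
            return "resolved"            # feedback also trains the model
        # Not answered: offer self-serve resources before a human handoff,
        # so the user is never simply left stranded.
        if topic in self.resources:
            return f"resource:{self.resources[topic]}"
        self.tickets.append(Ticket(question))
        return "escalated_to_human"
```

The point of the design is the guaranteed fallthrough: every “not answered” signal ends in either a concrete resource or a human, never a dead end.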

Transparency as a Trust-Building Factor

Trust, of course, is not established by better AI algorithms but by transparent system design that lets a user see what the system knows, why it reached its conclusions, and where its limitations are. On eHealth Africa, our project covering logistics and supply-chain data for the medical sector, this was non-negotiable: if AI agents predicted the timing of vaccine shipments or indicated optimal delivery routes, the justifications had to be explainable, because human decision-makers were deciding whether rural clinics received life-saving commodities on time.

To address this, we built what I call “reasoning panels”: output displayed alongside each AI suggestion. The panels did not expose the model’s internal computations, only why it reached its recommendation, including road conditions, previous delivery times on the route, weather, and available transport capacity. Field operatives could quickly tell whether the AI was working from outdated information, or whether they themselves had missed an essential, more recent fact such as a bridge closure. The panels made the agents transparent collaborators rather than opaque black boxes.
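A reasoning panel is essentially a recommendation plus dated evidence. A minimal sketch in Python, assuming an invented schema and a seven-day freshness window; neither is from the eHealth Africa system, but the structure shows how dating each input lets field staff spot stale advice:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Evidence:
    factor: str   # e.g. "road condition", "weather", "transport capacity"
    value: str
    as_of: date   # dating every input is what makes staleness visible

@dataclass(frozen=True)
class ReasoningPanel:
    recommendation: str
    evidence: tuple[Evidence, ...]

    def stale_inputs(self, today: date, max_age_days: int = 7) -> list[str]:
        """Factors older than the freshness window, e.g. a road report
        that predates a bridge closure."""
        return [e.factor for e in self.evidence
                if (today - e.as_of).days > max_age_days]
```

Showing only the factors and their dates, not the model internals, is what keeps the panel readable to a field operative in seconds.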

Transparency mattered as much in failure as in success. To that end, we built helpful failure states that explained why the AI could not offer a recommendation, instead of falling back on a generic error message. If it could not suggest an optimal route because it lacked connectivity data, it said so explicitly and told the user what they could do next.

Designing Handoffs Between Agents and Humans

Perhaps the most underdeveloped theme in agentic UX is the handoff: exactly when and how an AI agent should pass control of a system or an interaction to a human, whether that human is a colleague or the user themselves. This is where most trust is lost in multi-agent systems. One of my first projects to tackle it explicitly was Bridge Call Block for Viamo, a system that transferred users from IVR interactions to human customer service reps.

Our context-transfer protocol ensured that after every AI interaction, a structured summary appeared on the operator’s screen before they greeted the user. It contained what the user had asked, what the AI intended to say, and why the AI escalated the call. Users never had to repeat themselves, operators had the full interaction context, and this small detail of interaction design markedly improved average handling time and user satisfaction, because people felt respected and that their time had not gone to waste.
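The summary itself is a three-field record. A minimal sketch in Python; the field names and the brief format are illustrative, not Viamo’s actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HandoffSummary:
    """What the operator sees before greeting the caller."""
    user_request: str       # what the user asked
    ai_intended_reply: str  # what the AI was about to say
    escalation_reason: str  # why the AI handed off

    def operator_brief(self) -> str:
        return (f"Caller asked: {self.user_request}\n"
                f"AI draft reply: {self.ai_intended_reply}\n"
                f"Escalated because: {self.escalation_reason}")

def escalate(user_request: str, ai_reply: str, reason: str) -> HandoffSummary:
    # The summary travels with the call so the user never repeats themselves.
    return HandoffSummary(user_request, ai_reply, reason)
```

Making the record immutable and attaching it to the call, rather than expecting the operator to scroll a transcript, is what keeps the greeting instant.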

The reverse handoff, from human back to AI agent, deserved equal care. When operators needed to refer users back to the automated system, the interface helped them set accurate expectations about how much the AI could handle for a given task, so users returned to automation informed rather than primed for frustration.

Principles of Pragmatic Design of Agentic UX

After years of designing AI-enabled systems, I have arrived at a few pragmatic guidelines for effective agentic UX:

Firstly, design for the workflow, not the technology. Users don’t care whether they’re being helped by AI, rules, or human intelligence. They care about accomplishing their tasks effectively and conveniently. Begin from the target outcome and work backwards, identifying where AI agents add value and where they add complexity, and deploy them only where the value wins.

Secondly, define meaningful boundaries between AI agents. Users need to know when they are moving from one kind of intelligence to another, such as retrieval, model inference, and human judgement. Establish consistent visual and interaction conventions so they never have to wonder what kind of answer they will get, or when.

Thirdly, build verification into the workflow in a way that respects user expertise. AI should speed up decision-making by surfacing pertinent information and suggesting courses of action, but the decisions themselves should rest with the human users, who hold context the AI lacks. Design decision-verification flows into the interface to support exactly that.

These projects secured funding, boosted engagement by measurable increments, and served users in the thousands, but not because we possessed, or attempted to create, the most sophisticated AI. They succeeded because our interfaces let users understand what the AI was doing on their behalf, and that understanding built the trust needed to hand the system increasingly complex tasks over time. That is what makes them successful examples of agentic UX.

