A constitution for agentic AI is not just a safeguard; it’s the new gateway to participation in trusted markets and governance through verifiability

Agentic AI must learn to play by blockchain’s rules | Opinion

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

Agentic AI systems that can call tools on demand, set goals, spend money, and alter their own prompts are already creeping out of sandboxes and into production.

Summary
  • Governance through verifiability: As AI agents gain autonomy to spend, publish, and act, systems must enforce cryptographic provenance and auditability — turning AI accountability from guesswork into verifiable evidence.
  • Identity over anonymity: Agentic AI needs verifiable identities, not usernames. Using W3C Verifiable Credentials and smart account policies, agents can prove who they are, what they’re allowed to do, and maintain traceable accountability across platforms.
  • Signed inputs and outputs: Cryptographically signing every input, output, and action creates a transparent audit trail — transforming AI from a “black box” into a “glass box” where decisions are explainable, reproducible, and regulator-ready.

This shift upends the bargain society struck with AI at its origins: outputs were suggestions, and humans were on the hook. Now agents act, flipping that onus and opening the door to a host of ethical complications. If an autonomous system can alter records, publish content, and move funds, it must learn to play by the rules, and, more vitally, it must leave a trail that stands the test of time, so its actions can be audited and disputed if necessary.

Governance by engineering is needed now more than ever in the era of agentic AI, and the market is beginning to see this. Without cryptographic provenance and rules to bind agents, autonomy accumulates liabilities faster than it optimizes processes. When a trade goes wrong or a deepfake spreads, post-mortem forensics cannot rely on Slack messages or screenshots. Provenance is key, and it must be machine-verifiable from the moment inputs are captured through to the moment actions are taken.

Identities, not usernames

Handles or usernames are not enough; agents need to be given identities that can be proven with verifiable credentials. W3C Verifiable Credentials (VCs) 2.0 provides a standards-based way to bind attributes (like roles, permissions, attestations, etc.) to entities in a way that other machines can verify. 

Pair this verification with key management and policy in smart accounts, and soon enough, an agent can present exactly ‘who’ it is and ‘what’ it can do long before it executes a single action. In such a model, credentials become a trackable permission surface that follows the agent across chains and services, ensuring it operates within its rules and remains accountable.
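As a rough illustration of that permission surface, the sketch below gates an agent’s action on a verified credential. It is a minimal stand-in, not the W3C VC data model: the HMAC “proof” substitutes for a real issuer signature, and all keys, agent IDs, and permission names are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key; a real VC proof would use the issuer's asymmetric keypair.
ISSUER_KEY = b"issuer-demo-key"

def issue_credential(agent_id, permissions):
    """Bind attributes (here, permissions) to an agent identity and sign them."""
    claims = {"sub": agent_id, "permissions": sorted(permissions)}
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_and_authorize(credential, action):
    """Verify the proof first, then check the requested action against the claims."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["proof"]):
        return False  # forged or tampered credential
    return action in credential["claims"]["permissions"]

cred = issue_credential("agent-7", ["publish", "query"])
print(verify_and_authorize(cred, "publish"))   # True
print(verify_and_authorize(cred, "transfer"))  # False
```

The point of the shape, not the crypto: authorization is decided by verifying a machine-checkable credential, never by trusting a username.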

The messy provenance of widely used AI datasets, where misattribution and license-omission rates run above 70%, shows how fast non-verifiable AI crumbles under inspection. If the community can’t keep data straight for static training corpora, it can’t expect regulators to accept unlabeled, unverified agent actions in live environments.

Signing inputs and outputs

Agents act on inputs, whether that be a quote, a file, or a photo, and when those inputs can be forged or stripped of context, safety collapses. The Coalition for Content Provenance and Authenticity (C2PA) standard moves media out of the realm of guesswork and into cryptographically signed content credentials. 

Once again, credentials win over usernames, as seen in Google integrating content credentials into search and Adobe launching a public web app to embed and inspect them. The momentum here is toward artifacts that carry their own chain of custody, so agents that ingest data and emit only credentialed media will be easier to trust (and to govern).

This method should be extended to more structured data and decisions, such as when an agent queries a service. In that scenario, the service’s response should be signed, and the agent’s resulting decision should be recorded, sealed, and time-stamped for verification.
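The query-then-decide flow above can be sketched in a few lines. This is a hedged, stdlib-only illustration: HMAC stands in for real digital signatures, and the service name, trading pair, and action are invented for the example.

```python
import hashlib
import hmac
import json

# Hypothetical keys; in practice each party holds its own signing keypair.
SERVICE_KEY = b"service-demo-key"
AGENT_KEY = b"agent-demo-key"

def sign(key, record):
    """Sign a canonical JSON serialization of a record."""
    body = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

# 1. The queried service signs its response before the agent acts on it.
response = {"pair": "ETH/USD", "price": 3500.0, "ts": 1700000000}
response_sig = sign(SERVICE_KEY, response)

# 2. The agent seals its decision, binding in a digest of the input it used.
decision = {
    "input_digest": hashlib.sha256(
        json.dumps(response, sort_keys=True).encode()).hexdigest(),
    "action": "rebalance",
    "ts": 1700000005,
}
decision_sig = sign(AGENT_KEY, decision)

# 3. An auditor can later re-verify both signatures and the input binding.
assert sign(SERVICE_KEY, response) == response_sig
assert sign(AGENT_KEY, decision) == decision_sig
```

Because the decision embeds a digest of the signed input, an auditor can prove not just what the agent did, but which data it did it on.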

Without signed statements, post-mortems dissolve into finger-pointing and conjecture. With them, accountability becomes computable — every decision, action, and transition cryptographically tied to a verifiable identity and policy context. For agentic AI, this transforms post-incident analysis from subjective interpretation into reproducible evidence, where investigators can trace intent, sequence, and consequence with mathematical precision.

Establishing on-chain or permission-chained logs gives autonomous systems an audit spine — a verifiable trail of causality. Investigators gain the ability to replay behavior, counterparties can verify authenticity and non-repudiation, and regulators can query compliance dynamically instead of reactively. The “black box” becomes a glass box, where explainability and accountability converge in real time. Transparency shifts from a marketing claim to a measurable property of the system.
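One common way to build such an audit spine off-chain (or before anchoring to a chain) is a hash-chained append-only log, where each entry commits to the previous one. A minimal sketch, with the event strings purely illustrative:

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log):
    """Recompute every link; editing any earlier entry breaks verification."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "credential verified")
append_entry(log, "trade executed")
print(verify_chain(log))  # True
log[0]["event"] = "trade skipped"  # tampering with history...
print(verify_chain(log))  # False: ...is detectable
```

Anchoring the latest hash on-chain then makes the whole history non-repudiable without storing every entry on-chain.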

Providers capable of demonstrating lawful data sourcing, verifiable process integrity, and compliant agentic behavior will operate with lower friction and higher trust. They won’t face endless rounds of due diligence or arbitrary shutdowns. When an AI system can prove what it did, why it did it, and on whose authority, risk management evolves from policing to permissioning — and adoption accelerates.

This marks a new divide in AI ecosystems: verifiable agents that can lawfully interoperate across regulated networks, and opaque agents that cannot. A constitution for agentic AI — anchored in identity, signed inputs and outputs, and immutable, queryable logs — is not just a safeguard; it’s the new gateway to participation in trusted markets.

Agentic AI will only go where it can prove itself. Those who design for provability and integrity now will set the standard for the next generation of interoperable intelligence. Those who ignore that bar will face progressive exclusion—from networks, users, and future innovation itself.

Chris Anderson

Chris Anderson is the CEO of ByteNova AI, an emerging innovator in edge AI technology.
