
What boards should demand from AI: assessment, audit, and assurance

2026/03/24 00:03

By Erika Fille T. Legara

IN A PREVIOUS BusinessWorld article, I argued that AI governance goes beyond overseeing a handful of technology projects and now encompasses ensuring that AI-enabled decisions across the organization remain aligned with strategy, risk appetite, and ethical standards. A natural follow-on question for boards is: beyond setting expectations, how does an organization verify that its AI systems are actually performing as intended, responsibly, and within defined boundaries?

The answer lies in three related but distinct disciplines: AI risk assessment, AI audit, and AI assurance. Boards familiar with financial oversight will find the logic intuitive. The challenge, and the opportunity, is applying that same discipline to AI.

THREE DISTINCT BUT RELATED CONCEPTS
It helps to be precise about what each term means, because they are often used interchangeably when they should not be.

AI risk assessment is the internal process by which an organization identifies, evaluates, and prioritizes the risks associated with its AI systems. It asks what could go wrong, how likely it is, and what the impact would be. This is the foundation on which everything else rests. Without a credible risk assessment, neither audit nor assurance has a meaningful baseline to work from. Material AI systems exist across every sector: a credit scoring model in a bank, a patient triage tool in a hospital, a student performance predictor in a university, a case prioritization system in a government agency. What they share is consequence: their outputs affect real people in meaningful ways.

For any such system, risk assessment should be systematic, documented, and revisited regularly as the model evolves and as the operating environment changes.
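To make the discipline concrete, the identify-evaluate-prioritize loop can be sketched as a simple risk register. Everything below is hypothetical and for illustration only: the system names, the five-point likelihood and impact scales, and the risks themselves are assumptions, not drawn from any particular framework.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRisk:
    system: str          # the AI system the risk attaches to
    description: str     # what could go wrong
    likelihood: int      # 1 (rare) to 5 (almost certain)
    impact: int          # 1 (negligible) to 5 (severe)
    last_reviewed: date  # supports the "revisited regularly" requirement

    @property
    def score(self) -> int:
        # simplest prioritization: likelihood x impact
        return self.likelihood * self.impact

# Hypothetical register entries for illustrative systems
register = [
    AIRisk("credit-scoring-v3", "Disparate approval rates across age groups",
           likelihood=3, impact=5, last_reviewed=date(2025, 6, 1)),
    AIRisk("triage-model", "Performance drift after records-schema change",
           likelihood=4, impact=4, last_reviewed=date(2025, 9, 15)),
    AIRisk("case-prioritizer", "Opaque ranking criteria for appeals",
           likelihood=2, impact=3, last_reviewed=date(2025, 3, 10)),
]

# Surface what the board should see first: highest score at the top
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.system:20s} score={risk.score:2d}  {risk.description}")
```

Ranking by likelihood times impact is the barest possible scheme; real frameworks add risk owners, mitigations, and review cadences. But the board-level question it answers stays the same: which material systems carry the highest risk, and when were they last reviewed?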

AI audit is the independent examination of whether an AI system, or the governance framework surrounding it, conforms to defined standards, policies, or requirements. It is an evidence-based process conducted by a party sufficiently independent of those responsible for the system under review. An AI audit might assess whether an organization’s AI management practices conform to an internationally recognized standard, such as ISO/IEC 42001, the world’s first AI management system standard published in 2023, or whether a specific model is performing within approved parameters and without unintended bias. Importantly, the standard governing auditors themselves, ISO/IEC 42006, published in July 2025, now sets out the competence and rigor required of bodies that audit and certify AI management systems. The auditing profession, in other words, is beginning to formalize its own accountability for AI engagements.
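One narrow example of the evidence such an audit might examine is whether a model's decisions stay within an approved fairness tolerance. The sketch below is illustrative only: the metric (a demographic parity gap), the 10-percentage-point tolerance, and the decision data are all assumptions for this example, not requirements of ISO/IEC 42001 or any other standard.

```python
def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Largest difference in positive-outcome rates between any two groups.

    outcomes maps a group label to a list of 0/1 decisions (1 = approved).
    """
    rates = [sum(v) / len(v) for v in outcomes.values() if v]
    return max(rates) - min(rates)

# Tolerance the organization's governance policy is assumed to have approved
APPROVED_GAP = 0.10

# Hypothetical decision log, split by a protected attribute
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 3/8 = 37.5% approved
}

gap = demographic_parity_gap(decisions)
print(f"parity gap = {gap:.3f}; within approved parameters: {gap <= APPROVED_GAP}")
```

In practice an auditor would examine many such metrics, across data slices and over time, against criteria the organization has formally adopted; the point here is only that "within approved parameters" is a testable, evidence-based claim, not a matter of opinion.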

AI assurance is the formal, stakeholder-facing conclusion that emerges from that audit process. It is the professional opinion, issued by a qualified and independent party, that gives boards, regulators, investors, and the public confidence that an AI system or AI management framework meets a defined standard. Assurance is what transforms an internal review into a credible external signal.

GROUNDING AI ASSURANCE
The concept of independent assurance is not new to boards. Every year, external auditors examine an organization’s financial statements and issue an opinion: a conclusion grounded in evidence, formed under internationally recognized standards, and underpinned by the auditor’s professional independence. That opinion carries weight precisely because the framework governing it is rigorous and well established. This logic applies regardless of industry; whether the organization is a bank, a hospital, a conglomerate, or a public institution, the financial audit is a familiar and trusted mechanism.

The same logic now applies to AI. When an organization makes a public or regulatory claim about its AI systems (that they are fair, transparent, compliant with a defined standard, or free from material bias), the question is: who independently validates that claim, and under what professional framework?

The answer, for the accounting and audit profession, is ISAE 3000, the International Standard on Assurance Engagements issued by the International Auditing and Assurance Standards Board (IAASB). ISAE 3000 governs assurance engagements on matters other than historical financial information, making it the natural home for AI assurance. Under this standard, a professional can conduct either a reasonable assurance engagement, the higher standard analogous to a financial audit, or a limited assurance engagement, which is closer in depth to a review. The choice of level matters and should be deliberate, calibrated to the materiality and risk of the AI system in question.

A close contemporary parallel is sustainability or ESG assurance. Many Philippine-listed companies are already commissioning independent assurance on their sustainability disclosures, often under ISAE 3000. The mechanics are exactly the same: an independent practitioner examines a set of claims against defined criteria and issues a formal conclusion. The subject matter differs; the professional discipline does not.

WHAT THIS MEANS FOR BOARDS
Three practical implications follow from this framework.

First, boards should ask whether their organizations have conducted rigorous AI risk assessments on material systems. Not a one-time exercise, but a living process that is updated as models are retrained, use cases expand, and the regulatory environment evolves. The quality of downstream audit and assurance work is only as good as the risk assessment that precedes it.

Second, boards should distinguish between internal and external AI audit. Internal audit functions play a critical role in providing assurance that AI controls operate as designed. However, boards should also consider whether an independent, third-party audit of material AI systems is warranted, particularly for systems that affect customers, employees, or the public in consequential ways. As with financial auditing, independence strengthens credibility.

Third, as organizations increasingly make public commitments about their AI practices to regulators, investors, and the communities they serve, boards should ask whether those commitments are backed by credible assurance. Assertions without independent validation are, at best, a reputational risk waiting to materialize.

A PROFESSION STILL BUILDING ITS CAPABILITIES
It would be incomplete to present this landscape without acknowledging its current limitations. The infrastructure for AI assurance is still being built. Professional standards are emerging. Auditor competencies in AI, spanning machine learning, algorithmic bias, data governance, and model transparency, are not yet uniformly developed across the profession. ISAE 3000 provides the assurance framework, but the AI-specific methodologies that sit within it are still maturing.

For organizations not yet ready to pursue formal assurance, this is not a reason to stand still. A structured, regular assessment of material AI systems is a meaningful and practical first step. It builds the internal discipline, documentation, and governance habits that assurance-readiness eventually requires. Boards that commission such assessments today, even informally, are developing institutional muscle that will matter when regulatory expectations harden and stakeholder scrutiny intensifies.

This view is one I have explored more deeply in research I have been developing with colleagues examining generative AI governance in economies where regulation has yet to catch up with technology. The central argument is that firms are already moral agents with existing ethical obligations to their stakeholders; waiting for bespoke AI legislation is neither necessary nor sufficient for responsible governance. The obligation to act is already there. What is needed is the organizational will to operationalize it.

This is not a reason for boards to wait on the broader agenda. It is a reason to ask informed questions now, of their external auditors, their internal audit functions, and their management teams, so that when the profession’s capabilities catch up with the demand, their organizations are ready to engage meaningfully.

The financial audit did not emerge fully formed. It took decades of standard-setting, professional development, and hard lessons from corporate failures for the independent audit to become the credible institution it is today. AI assurance is at a comparable early inflection point. Boards that engage with it now, asking sharper questions of their auditors, demanding more than management assertions, and building internal capabilities before regulators require them, will not only reduce their own exposure. They will help shape what responsible AI accountability looks like for Philippine organizations and the broader region.

Erika Fille T. Legara is a physicist, educator, and data science and AI practitioner working across government, academia, and industry. She is the inaugural managing director and chief AI and data officer of the Education Center for AI Research, and an associate professor and Aboitiz chair in Data Science at the Asian Institute of Management, where she founded and led the country’s first MSc in Data Science program from 2017 to 2024. She serves on corporate boards, is a fellow of the Institute of Corporate Directors, an IAPP Certified AI Governance Professional, and a co-founder of CorteX Innovations.

