The post CVE Allocation: Why AI Models Should Be Excluded appeared on BitcoinEthereumNews.com.

CVE Allocation: Why AI Models Should Be Excluded



James Ding
Sep 26, 2025 19:58

Explore why Common Vulnerabilities and Exposures (CVE) should focus on frameworks and applications rather than AI models, according to NVIDIA’s insights.





The Common Vulnerabilities and Exposures (CVE) system, a globally recognized standard for identifying security flaws in software, is under scrutiny concerning its application to AI models. According to NVIDIA, the CVE system should primarily focus on frameworks and applications rather than individual AI models.

Understanding the CVE System

The CVE system, maintained by MITRE and supported by CISA, assigns unique identifiers and descriptions to vulnerabilities, facilitating clear communication among developers, vendors, and security professionals. However, as AI models become integral to enterprise systems, the question arises: should CVEs also cover AI models?
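Those unique identifiers follow a simple, well-defined syntax: CVE-YYYY-NNNN, where the sequence number has at least four digits. As a minimal illustration of how a tool might validate an identifier before looking it up (the function name here is hypothetical, not part of any official CVE tooling):

```python
import re

# CVE identifiers take the form CVE-YYYY-NNNN..., where the sequence
# number is at least four digits (it may be longer for high-volume years).
CVE_ID_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(candidate: str) -> bool:
    """Return True if the string is a syntactically valid CVE identifier."""
    return bool(CVE_ID_PATTERN.match(candidate))
```

Syntactic validation like this only confirms the identifier's shape; whether the record exists is a separate lookup against the CVE List.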

AI Models and Their Unique Challenges

AI models introduce failure modes such as adversarial prompts, poisoned training data, and data leakage. These resemble vulnerabilities but do not align with the CVE definition, which focuses on weaknesses violating confidentiality, integrity, or availability guarantees. NVIDIA argues that the vulnerabilities typically reside in the frameworks and applications that utilize these models, not in the models themselves.

Categories of Proposed AI Model CVEs

Proposed CVEs for AI models generally fall into three categories:

  1. Application or framework vulnerabilities: Issues within the software that encapsulates or serves the model, such as insecure session handling.
  2. Supply chain issues: Risks like tampered weights or poisoned datasets, better managed by supply chain security tools.
  3. Statistical behaviors of models: Features such as data memorization or bias, which do not constitute vulnerabilities under the CVE framework.
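The second category, supply-chain risk, is typically addressed with integrity checks rather than CVE entries. As a hedged sketch of what that looks like in practice (the function names are illustrative, not from any specific tool), a consumer of published model weights might pin the publisher's SHA-256 digest and verify the downloaded file against it before loading:

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large weight files need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(path: Path, expected_sha256: str) -> bool:
    """Compare the file's digest against the publisher's pinned hash."""
    return sha256_of_file(path) == expected_sha256.lower()
```

A checksum only detects tampering after publication; dataset poisoning that happens before the weights are produced requires provenance controls further up the pipeline.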

AI Models and CVE Criteria

AI models, due to their probabilistic nature, exhibit behaviors that can be mistaken for vulnerabilities. However, these are often typical inference outcomes exploited in unsafe application contexts. For a CVE to be applicable, a model must fail its intended function in a way that breaches security, which is seldom the case.

The Role of Frameworks and Applications

Vulnerabilities often originate from the surrounding software environment rather than the model itself. For example, adversarial attacks manipulate inputs to produce misclassifications; this is a failure of the application to detect such queries, not a flaw in the model. Similarly, issues like data leakage result from memorization during training and require system-level mitigations.
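One way the application layer can shoulder this responsibility is to screen queries before they reach the model. The sketch below is purely illustrative (the class, the token-overlap heuristic, and the thresholds are assumptions, not a production policy): it flags bursts of near-duplicate queries, a pattern associated with black-box adversarial probing.

```python
from collections import deque

class QueryGuard:
    """Application-side screen that flags bursts of near-duplicate queries.

    Repeatedly submitting small variations of one input is a common
    signature of black-box adversarial probing; blocking it belongs to
    the application, not the model.
    """

    def __init__(self, window: int = 50, max_similar: int = 5):
        self.recent = deque(maxlen=window)  # sliding window of past queries
        self.max_similar = max_similar

    @staticmethod
    def _similar(a: str, b: str) -> bool:
        # Crude Jaccard overlap on whitespace tokens; a real system
        # would use embeddings or perceptual hashes.
        ta, tb = set(a.split()), set(b.split())
        if not ta or not tb:
            return a == b
        return len(ta & tb) / len(ta | tb) > 0.8

    def allow(self, query: str) -> bool:
        """Return False once too many similar queries appear in the window."""
        similar = sum(1 for q in self.recent if self._similar(q, query))
        self.recent.append(query)
        return similar < self.max_similar
```

The point of the sketch is architectural: the remediation (and therefore any CVE) lives in this wrapper code, which can be patched and versioned, not in the model weights behind it.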

When CVEs Might Apply to AI Models

One exception where CVEs could be relevant is when poisoned training data results in a backdoored model. In such cases, the model itself is compromised during training. However, even these scenarios might be better addressed through supply chain integrity measures.

Conclusion

Ultimately, NVIDIA advocates for applying CVEs to frameworks and applications where they can drive meaningful remediation. Enhancing supply chain assurance, access controls, and monitoring is crucial for AI security, rather than labeling every statistical anomaly in models as a vulnerability.

For further insights, you can visit the original source on NVIDIA’s blog.

Image source: Shutterstock


Source: https://blockchain.news/news/cve-allocation-why-ai-models-should-be-excluded
