Explore how generative AI is transforming cybersecurity: its dual-use risks, defense tools, and what teams must do to stay ahead.Explore how generative AI is transforming cybersecurity: its dual-use risks, defense tools, and what teams must do to stay ahead.

How Generative AI Can Be Used in Cybersecurity

2025/09/24 14:53
8 min read
For feedback or concerns regarding this content, please contact us at crypto.news@mexc.com

Generative AI has entered cybersecurity with full force, and like every powerful technology, it comes with its pros and cons.

On one side, attackers are already experimenting with AI to generate malware, craft phishing campaigns, and create deepfakes that erode trust. On the other hand, defenders are beginning to use AI to scale penetration testing, accelerate application security, and reduce the pain of compliance.

The stakes are high. A recent ForeScout Vedere Labs 2025 report showed zero-day exploits have risen 46% year over year — a clear signal that attackers are accelerating. At the same time, Gartner predicts that by 2028, 70% of enterprises will adopt AI for security operations.

The reality sits in between: AI is already changing penetration testing, application security, and compliance — but it’s not a replacement for human expertise. Instead, it’s a force multiplier, reshaping how quickly and effectively security teams can discover weaknesses, meet regulatory obligations, and prepare for adversaries that are also harnessing AI.

\

The Dual-Use Nature of Generative AI

Generative AI in cybersecurity is best understood as a dual-use technology — it amplifies both attack and defense capabilities.

GenAI for Attackers

AI lowers barriers by generating sophisticated phishing emails, fake personas, malicious code, and even automated exploit chains. Tools like CAI (Cognitive Autonomous Intelligence) demonstrate how autonomous agents can be tasked with scanning, exploiting, and pivoting through systems — blurring the line between proof-of-concept research and adversary capability. BlackMamba (an AI-generated polymorphic keylogger) and WormGPT (marketed on underground forums as “ChatGPT for cybercrime”) have already shown what’s possible.

GenAI for Defenders

AI provides scale, speed, and intelligence. Beyond SOC copilots, AI is being embedded directly into the software development lifecycle (SDLC) via AI security code reviewers and AI-powered vulnerability scanners. GitHub Copilot (with secure coding checks), CodiumAI, and Snyk Code AI catch issues earlier, reducing downstream remediation costs. Microsoft’s Security Copilot helps analysts triage alerts and accelerate investigations.

This duality is why many experts warn of an “AI arms race” between security teams and cybercriminals — where speed, automation, and adaptability may decide outcomes.

\

Offensive Security & Penetration Testing

Penetration testing has traditionally been time-intensive, relying on skilled specialists to probe for vulnerabilities in networks, applications, and infrastructure. AI is shifting the tempo.

Large language models and autonomous agents can now:

  • Generate custom exploits and payloads on demand.
  • Mimic phishing and social engineering campaigns at scale.
  • Run fuzzing routines to simulate zero-day vulnerabilities before attackers do.

A striking proof point is XBOW, the autonomous AI pentester that recently climbed to #1 on HackerOne’s U.S. leaderboard. In controlled benchmarks, XBOW solved 88 out of 104 challenges in just 28 minutes — a task that took a seasoned human tester over 40 hours. In live programs, it has already submitted over a thousand vulnerability reports, including a zero-day in Palo Alto’s GlobalProtect VPN.

Other examples include:

  • AutoSploit, an early attempt at AI-assisted exploitation pairing Shodan with Metasploit.
  • Bug bounty hunters using LLMs as copilots for reconnaissance and payload generation.
  • MITRE ATLAS, a framework mapping how adversaries might use AI in cyberattacks.

Yet despite its speed and precision, tools like XBOW still require human oversight. Automated results must be validated, prioritized, and — critically — mapped to regulatory and business risk. Without that layer, organizations risk drowning in noise or overlooking vulnerabilities that matter most for compliance and trust.

This is the shape of penetration testing to come: faster, AI-augmented discovery coupled with expert judgment to make results meaningful for businesses under pressure from regulators and partners.

\

How Can Generative AI Be Used in Application Security

Application security (AppSec) is another area seeing rapid AI adoption. The software supply chain has grown too vast and complex for purely manual testing, and generative AI is stepping in as a copilot.

Key applications include:

  • Code analysis and secure SDLC copilots: GitHub Copilot and CodiumAI spot insecure patterns before code reaches production.
  • AI-powered security scanners: Snyk Code AI and ShiftLeft Scan continuously crawl apps and APIs, flagging vulnerabilities in real time.
  • Auto-patching suggestions: GitHub now generates AI-driven pull requests suggesting secure fixes.
  • Testing LLM-based apps: The rise of AI-powered chatbots introduces new risks. Prompt injection attacks are already in the wild. OWASP responded with the first Top 10 for LLM Applications in 2023.
  • API fuzzing and zero-day simulations: Tools like Peach Fuzzer and AI-driven agents autonomously generate malformed inputs at scale.

The promise is efficiency — but the challenge is trust. An AI-generated patch may fix one issue while creating another. That’s why AI is best deployed as an accelerator in AppSec, with humans validating its findings and ensuring fixes align with compliance frameworks like ISO 27001, HIPAA, or FDA MDR/IVDR for medical software.

\

How Can Generative AI Be Used in Compliance & Governance

Beyond pentesting and AppSec, AI is finding a role in the often overlooked world of compliance. For companies in healthtech, biotech, or fintech, compliance can make or break growth — and AI is beginning to reduce the heavy lift.

Emerging applications include:

  • Automating evidence collection for ISO 27001, SOC 2, HIPAA, and GDPR.
  • Mapping vulnerabilities to controls: Linking pentest findings directly to FDA SPDF or ISO clauses.
  • Generating audit-ready reports: Platforms like Vendict, Scrut, and Thoropass use AI to translate security posture into regulator-friendly documentation.

This is particularly powerful in genomics or diagnostics, where startups face heavy regulatory burden and need to show both security and compliance maturity to win partnerships or funding.

\

Industry Examples

The use of AI in cybersecurity isn’t hypothetical — it’s playing out across industries today:

  • IBM, NVIDIA, Accenture: AI copilots for SOC operations and threat detection.
  • Vendict, Scrut, Thoropass: Embedding AI in GRC workflows.
  • Governments and defense sectors: DARPA’s AI Cyber Challenge (AIxCC) uses AI for red-teaming resilience.
  • Adversaries: North Korean APT groups and organized fraud rings are already using AI for smishing, phishing, and deepfake scams.
  • Case study: In 2019, a UK energy firm lost $240,000 after a CEO voice deepfake tricked staff into wiring money.

\

Emerging Risks of Generative AI in Cybersecurity

With opportunity comes risk. AI introduces new attack vectors and amplifies existing ones:

  • AI-powered phishing and social engineering: Deepfake audio scams are growing in sophistication.
  • Prompt injection and model manipulation: OWASP’s LLM Top 10 highlights prompt injection as the #1 risk.
  • Bias and privacy: Training models on sensitive datasets risks compliance violations under GDPR.
  • Over-reliance: Treating AI outputs as gospel risks blind spots and false positives.
  • Hallucinations: Studies show AI copilots fabricate vulnerabilities or fixes.
  • Dependency risk: SaaS outages or API shifts in AI platforms can disrupt pipelines.

\

Best Practice Strategy for Secure AI Adoption

To adopt AI in pentesting, AppSec, or compliance responsibly, organizations should:

  • Keep humans in the loop: Validate AI findings before action.
  • Govern “shadow AI”: Prevent unsanctioned AI tool use (e.g., Samsung’s data leak into ChatGPT).
  • Run continuous simulations: Microsoft’s AI Red Team tests copilots for adversarial risks.
  • Integrate into secure SDLC: Deploy AI reviewers and scanners directly in dev pipelines.
  • Apply governance frameworks: NIST AI Risk Management Framework and ENISA’s AI Security Guidelines help ensure ethical and safe AI use.

\

Conclusion & Outlook

So, how can generative AI be used in cybersecurity? It won’t replace penetration testers, application security engineers, or compliance leads. But it will accelerate their work, expand their coverage, and reshape how vulnerabilities are found and reported.

The winners won’t be those who adopt AI blindly, nor those who ignore it. They’ll be the organizations that harness AI as a trusted copilot — combining speed with human judgment, technical depth with regulatory alignment, and automation with accountability.

By 2030, AI-driven pentesting and compliance automation may become table stakes. The deciding factor will not be whether companies use AI, but how responsibly, strategically, and securely they use it — especially in regulated sectors where compliance and trust are non-negotiable.

\

Further Reading & References

  1. ForeScout Vedere Labs H1 2025 Threat Review

  2. Gartner – The Future of AI in Cybersecurity

  3. CAI – Cognitive Autonomous Intelligence

  4. BlackMamba AI Keylogger

  5. WormGPT Underground Tool

  6. GitHub Copilot

  7. CodiumAI

  8. Snyk Code AI

  9. Microsoft Security Copilot

  10. XBOW Autonomous Pentester

  11. Palo Alto GlobalProtect VPN Vulnerability

  12. AutoSploit

  13. AI in Bug Bounties – PortSwigger

  14. MITRE ATLAS

  15. OWASP Top 10 for LLM Apps

  16. ISO 27001 Standard

  17. HIPAA Security Rule

  18. FDA Medical Device Regulation

  19. FDA SPDF Guidance

  20. Vendict

  21. Scrut

  22. Thoropass

  23. IBM Security AI

  24. NVIDIA AI for Security

  25. Accenture Security

  26. DARPA AIxCC

  27. North Korean APT Attacks – Mandiant

  28. WSJ – Deepfake CEO Fraud Case

  29. FT – Deepfake Audio Scams

  30. GDPR Text

  31. Samsung ChatGPT Data Leak – The Register

  32. Microsoft – AI Red Teaming

  33. NIST AI Risk Management Framework

  34. ENISA AI Security Guidelines

    \

\

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

The Federal Reserve cut interest rates by 25 basis points, and Powell said this was a risk management cut

The Federal Reserve cut interest rates by 25 basis points, and Powell said this was a risk management cut

PANews reported on September 18th, according to the Securities Times, that at 2:00 AM Beijing time on September 18th, the Federal Reserve announced a 25 basis point interest rate cut, lowering the federal funds rate from 4.25%-4.50% to 4.00%-4.25%, in line with market expectations. The Fed's interest rate announcement triggered a sharp market reaction, with the three major US stock indices rising briefly before quickly plunging. The US dollar index plummeted, briefly hitting a new low since 2025, before rebounding sharply, turning a decline into an upward trend. The sharp market volatility was closely tied to the subsequent monetary policy press conference held by Federal Reserve Chairman Powell. He stated that the 50 basis point rate cut lacked broad support and that there was no need for a swift adjustment. Today's move could be viewed as a risk-management cut, suggesting the Fed will not enter a sustained cycle of rate cuts. Powell reiterated the Fed's unwavering commitment to maintaining its independence. Market participants are currently unaware of the risks to the Fed's independence. The latest published interest rate dot plot shows that the median expectation of Fed officials is to cut interest rates twice more this year (by 25 basis points each), one more than predicted in June this year. At the same time, Fed officials expect that after three rate cuts this year, there will be another 25 basis point cut in 2026 and 2027.
Share
PANews2025/09/18 06:54
SEC Approves Generic Listing Standards for Crypto ETFs

SEC Approves Generic Listing Standards for Crypto ETFs

In a bombshell filing, the SEC is prepared to allow generic listing standards for crypto ETFs. This would permit ETF listings without a specific case-by-case approval process. The filing’s language rests on cryptoassets that are commodities, not securities. However, the Commission is reclassifying many such assets, theoretically enabling an XRP ETF alongside many other new products. Why Generic Listing Standards Matter The SEC has been tacitly approving new crypto ETFs like XRP and DOGE-based products, but there hasn’t been an unambiguously clear signal of greater acceptance. Huge waves of altcoin ETF filings keep reaching the Commission, but there hasn’t been a corresponding show of confidence. Until today, that is, as the SEC just took a sweeping measure to approve generic listing standards for crypto ETFs: “[Several leading exchanges] filed with the SEC proposed rule changes to adopt generic listing standards for Commodity-Based Trust Shares. Each of the foregoing proposed rule changes… were subject to notice and comment. This order approves the Proposals on an accelerated basis,” the SEC’s filing claimed. The proposals came from the Nasdaq, CBOE, and NYSE Arca, which all the ETF issuers have been using to funnel their proposals. In other words, this decision on generic listing standards could genuinely transform crypto ETF approvals. A New Era for Crypto ETFs Specifically, these new standards would allow issuers to tailor-make compliant crypto ETF proposals. If these filings meet all the Commission’s criteria, the underlying ETFs could trade on the market without direct SEC approval. This would remove a huge bottleneck in the coveted ETF creation process. “By approving these generic listing standards, we are ensuring that our capital markets remain the best place in the world to engage in the cutting-edge innovation of digital assets. This approval helps to maximize investor choice and foster innovation by streamlining the listing process,” SEC Chair Paul Atkins claimed in a press release. The SEC has already been working on a streamlined approval process for crypto ETFs, but these generic listing standards could accomplish the task. This rule change would rely on considering tokens as commodities instead of securities, but federal regulators have been reclassifying assets like XRP. If these standards work as advertised, ETFs based on XRP, Solana, and many other cryptos could be coming very soon. This quiet announcement may have huge implications.
Share
Coinstats2025/09/18 06:14
South Korea Halts Trading as Global Markets Plunge

South Korea Halts Trading as Global Markets Plunge

The post South Korea Halts Trading as Global Markets Plunge appeared on BitcoinEthereumNews.com. The Korean Stock Exchange was forced to halt trading after the
Share
BitcoinEthereumNews2026/03/05 07:04