
Researcher uncovers a critical SSRF vulnerability in ChatGPT’s Custom GPT

2025/11/13 18:42

OpenAI has patched a security flaw that a researcher found earlier this week in the “Actions” feature of ChatGPT’s Custom GPTs. Attackers could have exploited a Server-Side Request Forgery (SSRF) bug to expose internal credentials in OpenAI’s cloud infrastructure, the researcher claimed.

SirLeeroyJenkins, a bug hunter and engineer at Open Security, was building his first Custom GPT when he “sensed” an SSRF vulnerability. The Actions feature lets users define external APIs with OpenAPI schemas so the AI can call them for specific tasks, such as fetching weather data.
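For readers unfamiliar with Actions, the schema a Custom GPT consumes looks roughly like the following. This is a purely illustrative sketch (shown here as a Python dict; the server URL, path, and operation name are hypothetical, not the researcher's actual configuration):

```python
# Illustrative only: a minimal OpenAPI 3.1 schema of the kind a Custom GPT
# Action consumes. All names and URLs here are hypothetical examples.
WEATHER_ACTION_SCHEMA = {
    "openapi": "3.1.0",
    "info": {"title": "Weather lookup", "version": "1.0.0"},
    # The server URL is user-supplied -- this is the knob the researcher
    # later pointed at an internal address.
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getWeather",
                "parameters": [{
                    "name": "city",
                    "in": "query",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Current conditions"}},
            }
        }
    },
}
```

Because the model fetches whatever URL the schema names, the server making the request, not the user's browser, becomes the thing an attacker can steer.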

While testing his own API, SirLeeroyJenkins discovered the system returned data from a user-provided URL. Alarmed by this behavior, he conducted more tests, suspecting a potential SSRF issue.

“Once I realized this feature could return data from any user-provided URL, the hacker instinct kicked in,” he said. “I had to check for SSRF.”

SSRF vulnerability could make Custom GPTs unsafe

As explained by Jenkins in his Medium post published earlier this week, Server-Side Request Forgery is a web vulnerability that tricks applications into making requests to unintended destinations. If the application does not properly validate user-supplied URLs, attackers can use the server’s access privileges to reach internal networks or cloud metadata services.
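The standard defense against this class of bug is to validate the destination before the server fetches it. A minimal sketch in Python (this is a generic mitigation pattern, not OpenAI's actual fix, and it only covers IP-literal hosts):

```python
import ipaddress
from urllib.parse import urlparse

def is_request_allowed(url: str) -> bool:
    """Reject URLs pointing at loopback, private, or link-local addresses
    (cloud metadata services live at the link-local 169.254.169.254).

    Minimal sketch: handles only IP-literal hosts. A real defense must
    also resolve hostnames and re-check the resolved address, and must
    re-validate the destination after every redirect.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Hostname, not an IP literal: would need DNS resolution
        # plus pinning of the resolved address (omitted here).
        return True
    return not (ip.is_private or ip.is_loopback or ip.is_link_local)
```

As the comments note, checking only the initial URL is not enough; the redirect-following behavior described below is exactly how such a check gets bypassed.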

Basic full-read SSRF chart. Source: SirLeeroyJenkins Medium blog.

SSRF was prevalent enough to make the OWASP Top 10 list in 2021, and its potential damage has since grown because insecure default configurations in cloud environments can expose critical systems.

Jenkins explained that there are two main types of SSRF: full-read and blind. Full-read SSRF returns data from the target service directly to the attacker, while blind SSRF does not reveal the response but still lets the attacker interact with internal services, for example through timing-based port scanning.
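The timing side channel behind blind SSRF can be sketched as follows. Here `fetch` is a hypothetical stand-in for whatever server-side request the vulnerable feature makes; the attacker never sees the response, only how long it took:

```python
import time

def timed_probe(fetch, url: str, timeout: float = 3.0) -> tuple[float, bool]:
    """Measure how long a server-side fetch of `url` takes.

    Blind SSRF returns no response body, so the attacker reads timing
    instead: a quick error usually means the target port answered
    (open, or actively refused), while waiting out the full timeout
    suggests a filtered port or unreachable host.
    """
    start = time.monotonic()
    succeeded = True
    try:
        fetch(url, timeout=timeout)
    except Exception:
        succeeded = False
    return time.monotonic() - start, succeeded
```

Repeating this across internal addresses and ports turns a feature that "shows nothing" into a crude network scanner.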

He tested the vulnerability by pointing the Action’s API URL at Azure’s Instance Metadata Service (IMDS), which stores sensitive cloud credentials. Access to this service normally requires a “Metadata: true” request header, which his initial attempts had no way to supply.

The Custom GPT feature initially blocked the exploit because it enforced HTTPS URLs, while Azure IMDS operates over plain HTTP. Jenkins worked around this with a 302 redirect from an external HTTPS endpoint to the internal metadata URL, and the server followed the redirect. However, Azure still blocked access because the required header was missing.

“Since the server followed 302 redirects, it returned the response from their internal metadata URL. Mission accomplished, right? Wrong. The response from their metadata service indicated that a required header was not being set,” SirLeeroyJenkins noted.

Probing further, Jenkins found that the feature allowed custom API keys with arbitrary names. By naming a key “Metadata” with the value “true”, he injected the required header, granting the GPT access to the metadata service.
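The final bypass reduces to a header-injection pattern that can be sketched in a few lines (the function and field names here are hypothetical, not OpenAI's internal code):

```python
def build_action_headers(key_name: str, key_value: str) -> dict[str, str]:
    """Sketch of the flaw: the Actions auth config let the user choose
    the *name* of the API-key header, not just its value. With no
    allow-list on the name, any header can be smuggled in."""
    return {key_name: key_value}

# Naming the "API key" Metadata with value true yields exactly the
# header Azure IMDS requires -- sent from inside the trusted network.
headers = build_action_headers("Metadata", "true")
```

Combined with the redirect trick above, the server-side fetch now reaches the metadata service with a valid “Metadata: true” header, which is what made credential exposure possible.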

Jenkins promptly reported the vulnerability to OpenAI’s Bugcrowd program, and the issue was assigned high severity and then patched.

He also mentioned that Open Security had previously used this type of SSRF attack chain against a vulnerable invoice-generation feature at a major global financial firm during a security audit.

OpenAI releases GPT-5.1 after the version 5.0 turmoil

In other ChatGPT news, OpenAI announced the launch of GPT-5.1, touting several updates over version 5.0 that improve instruction following and adaptive reasoning.

“GPT-5.1 is out! It’s a nice upgrade. I particularly like the improvements in instruction following, and the adaptive thinking. The intelligence and style improvements are good too,” wrote CEO Sam Altman on X late Wednesday.

Tech writer Mehul Gupta tested GPT-5.1 against its predecessor, noting that GPT-5, while polished and helpful, sometimes overcomplicates simple tasks. GPT-5.1’s Instant version reportedly showed improved understanding and subtle adaptive pauses that produced more “context-aware” responses.

In one test, Gupta asked both models to reply in six words. GPT-5 attempted to overexplain, while GPT-5.1 delivered a concise and correct answer. 

Altman also announced that seven new presets, including Default, Friendly, Efficient, Professional, Candid, and Quirky, have been added, though users can also choose to “tune it themselves.”


Source: https://www.cryptopolitan.com/chatgpt-hacked-exploiting-ssrf-vulnerability/
