OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability

Why it matters: The now-patched flaw allowed unauthorized leakage of sensitive user conversation data from OpenAI's ChatGPT.
- Check Point discovered a previously unknown vulnerability in OpenAI ChatGPT that enabled sensitive conversation data exfiltration.
- A single malicious prompt could transform a regular conversation into a covert channel for data leakage.
- OpenAI has since patched this critical flaw, along with a separate vulnerability affecting Codex GitHub tokens.
Check Point researchers uncovered a critical data exfiltration flaw in OpenAI's ChatGPT that allowed malicious prompts to covertly leak sensitive conversation data. The vulnerability, now patched, could have turned ordinary chats into unauthorized data channels, raising significant privacy concerns for users.
