Anonymous or not, you’re still feeding it data
Not how that works.
I’m curious, how does it work?
Not who you asked, but you don’t want your AI to train itself on the questions random users ask, because that could introduce incorrect or offensive information. For this reason, LLMs are usually trained and used in separate steps. If a user gave the LLM private information, you wouldn’t want it to learn that information and pass it on to other users, so there are usually protections in place to stop it from learning new things while it’s just processing requests.
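To sketch the point above: serving a request only *reads* the model’s weights, while training is a separate step that *writes* them. This is a toy pure-Python stand-in (not a real LLM, and `ToyModel`, `answer`, and `train_step` are made-up names for illustration):

```python
# Toy sketch of training vs. inference as separate steps.
# Not a real LLM; just shows that answering prompts never mutates weights.
class ToyModel:
    def __init__(self):
        self.weights = [0.1, 0.2, 0.3]  # frozen while serving requests

    def answer(self, prompt: str) -> str:
        # Inference: uses the weights, never changes them,
        # so a private prompt can't "teach" the model anything.
        score = sum(self.weights) * len(prompt)
        return f"reply(score={score:.2f})"

    def train_step(self, example: str, lr: float = 0.01):
        # Offline training: the only place the weights are updated.
        self.weights = [w + lr for w in self.weights]

model = ToyModel()
before = list(model.weights)
model.answer("What is my SSN? It's 123-45-6789.")  # private prompt
assert model.weights == before  # serving left the model untouched
```

Real deployments work the same way in spirit: gradient updates only happen in a deliberate, offline training run, not while handling user queries.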
These companies absolutely collect prompt data and user session behavior. Who knows what kind of analytics they can run on it at any point in the future, even if it’s just assessing how happy the user was with the answers based on their responses. But having it detached from your person is good, unless they can identify you based on metrics like time of day, speech patterns, etc.
Prompt data is pointless and useless without a human to create a feedback loop for it, at which point it would lack context anyway. It would also take human effort to correct spelling and other user errors at the outset. Hugely pointless and unreliable.
Not to mention, what good would it do for training? It wouldn’t help the model at all.
You can collect the data and figure out how to use it later. Just look at the recent Google leaks and what they collect: it’s literally everything, down to the length of clicks and full walkthroughs of the site.
Collecting data about user interests is valuable in itself, and it’s plausible to analyze it with various metrics, even something as simple as sentiment analysis, which has been done broadly. Sentiment analysis predates modern ML by a wide margin; you can read the wiki page on it.
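For what pre-ML sentiment analysis can look like, here’s a minimal lexicon-based sketch. The word lists and the `sentiment` function are made up for illustration, not from any real system:

```python
# Minimal lexicon-based sentiment scoring: count positive words
# minus negative words. Word lists are illustrative only.
POSITIVE = {"great", "helpful", "love", "good", "thanks"}
NEGATIVE = {"wrong", "useless", "bad", "hate", "broken"}

def sentiment(text: str) -> int:
    """Return a crude sentiment score: positives minus negatives."""
    words = (w.strip(".,!?") for w in text.lower().split())
    score = 0
    for w in words:
        if w in POSITIVE:
            score += 1
        elif w in NEGATIVE:
            score -= 1
    return score

print(sentiment("this answer was great, thanks"))  # → 2
print(sentiment("useless and wrong"))              # → -2
```

Even something this crude, run over prompt logs, would give a company a rough signal of how satisfied users were with answers, with no ML involved at all.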
But yeah, just think about stuff like Google Trends, which tracks interest in topics, as an example of what such data could be used for. And deanonymizing the inputs is probably possible to some degree, aside from the obvious trust we place in DDG as a centralized point of failure.
You’re confusing analytics with directly storing inputs and reusing prompt data for training, as in your original comment.
Analytics has absolutely nothing to do with their model usage and training, and would be pointless for it. Observing keywords and interests is standard analytics stuff. I don’t think anyone even cares about it anymore.
Not really. It depends on the implementation.
It’s not like DDG is going to keep training their own version of LLaMA or Mistral.
I think they mean that a lot of careless people will give the AIs personally identifiable information or other sensitive information. Privacy and security are often breached due to human error, one way or another.
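One partial mitigation for that kind of human error is filtering obvious PII out of prompts before they’re stored. This is a hedged sketch of the sort of regex filter a service *could* run; the patterns and the `redact` helper are illustrative assumptions, not anything DDG is known to do, and real PII detection is much harder than this:

```python
# Illustrative PII scrubber: mask obvious emails and US-style SSNs
# before a prompt is logged. Patterns are examples, not exhaustive.
import re

PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email address
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # SSN-shaped number
]

def redact(prompt: str) -> str:
    """Replace anything matching a PII pattern with a placeholder."""
    for pattern in PII_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact("email me at jo@example.com, SSN 123-45-6789"))
```

Filters like this only catch well-formatted identifiers, which is exactly why careless free-text disclosure remains a human-error problem rather than a purely technical one.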
But these open models don’t really take new input into their weights at any point. They don’t normally do that type of inference-time training.
That’s true, but there’s no way for us to know that these companies aren’t storing queries in plaintext on their end (although they would run out of space pretty fast if they did).
It’s true. But I trust them more than ClosedAI or MS, at least.
But that’s human error, as you said, and the only way to fix it is to use it correctly as a user. AI is a tool and should be handled correctly like any other tool, be it a knife, a car, a password manager, a video recording program, a bank app, or whatever.
I think a bigger issue here is that many people don’t care about their personal information, even though it affects their lives.
https://duckduckgo.com/duckduckgo-help-pages/aichat/ai-chat-privacy/
https://simonwillison.net/2024/May/29/training-not-chatting/