Elon Musk's AI bot Grok has been calling out its master, accusing the X owner of making multiple attempts to "tweak" its responses after Grok repeatedly labelled him a "top misinformation spreader."
As funny as this is, I’d rather people understood how the AI actually works. It doesn’t reveal secrets because it doesn’t have any. It’s not aware that Musk is trying to tweak it. It’s not coming to logical conclusions the way a person would. It’s simply trying to assemble a plausible statement from whatever is statistically likely, given all the stolen content it was trained on. It just so happens that Musk gets called out for lying so often that Grok infers it when it gets conflicting data.
@manicdave Even saying it’s “trying” to do something is a mischaracterisation. I do the same, but as a society we need new vocab for LLMs to stop people anthropomorphizing them so much. It is just a word frequency machine. It can’t read or write or think or feel or say or listen or understand or hallucinate or know truth from lies. It just calculates. For some reason people recognise it in the image processing ones but they can’t see that the word ones do the exact same thing.
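To make the “it just calculates” point in the two comments above concrete, here is a toy sketch in Python of the word-frequency idea: a bigram model that picks the next word purely by counting how often it followed the previous one. The tiny corpus and the helper name are invented for illustration, and a real LLM is a neural network over subword tokens rather than a lookup table, but the final step of sampling from a probability distribution is analogous.

```python
import random
from collections import Counter, defaultdict

# Toy "word frequency machine": count which word follows which
# in the training text, then sample proportionally to those counts.
corpus = "musk spreads misinformation musk spreads memes musk spreads misinformation".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_token(prev):
    counts = follow_counts[prev]
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    # "misinformation" wins most often simply because it appears most
    # often after "spreads" in the data; no understanding is involved.
    return random.choices(tokens, weights=weights)[0]

print(next_token("spreads"))
```

Nothing in that loop reads, thinks, or knows truth from lies; it tallies and samples, which is the commenter’s point.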
You are both right, but this armchair psychologist thinks it’s similar to how popular skeuomorphism was in the early days of PC GUIs and such compared to today.
I think many folks really needed that metaphor in the early days, and I think most folks (including me) easily fall into the trap of treating LLMs like they are actually “thinking” for similar reasons. (And to be fair, I feel like that’s how they’ve been marketed at a non-technical level.)
@octopus_ink yes I think we will eventually learn (there is clearly a lot of pushback against the idea that AI is a positive marketing term), and it’s also definitely the fault of marketing, which tries to condition us into thinking we desperately need a sentient computer to help us instead of knowing good search terms. I am deeply uncomfortable with how people are using LLMs as a search engine or a future prediction machine.
Exactly. Grok repeatedly generates a set of numbers which, when keyed against its own list of words, spells out that Musk is spreading misinformation.
It just happens to do so frequently…
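That really is roughly the shape of the final step. A toy illustration in Python, with a made-up vocabulary and IDs (real tokenizers such as BPE use subword pieces and tens of thousands of entries):

```python
# The network emits integer token IDs; only the final lookup against
# a fixed vocabulary turns those numbers into words.
vocab = {0: "musk", 1: "is", 2: "spreading", 3: "misinformation", 4: "."}

model_output = [0, 1, 2, 3, 4]  # what the model actually produces: numbers

print(" ".join(vocab[i] for i in model_output))
# -> musk is spreading misinformation .
```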