This one on lemmy.today, and my original account with the same username on lemmy.world.
Hello, I’m an archivist who does things.
E? E.
People can also stop and think for a second about the information they're actually about to say, whereas an LLM just vomits up words that seem to match the pattern of the rest of the sentence. If I were to ask you what 2 + 2 is, you'd stop, run the math in your head, get 4, then reply with 4. An LLM would just start vomiting out words based on what it's been trained on, without verifying that the information is good (or even relevant), and can end up confidently telling you that 2 + 2 is in fact equal to the cube root of 5, because that's what the data said, so it has to be right.
I’m aware this is a drastic oversimplification, and I think the tech is neat (although I avoid non-self-hosted models like the plague due to privacy concerns), but it’s oversold to all hell, and is definitely not even close to intelligent.
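To make the "pattern matching without verification" point concrete, here's a deliberately silly toy sketch in Python. It's nothing like a real LLM (no neural net, no tokenizer; all names and the tiny "training set" are made up for illustration), but it shows the failure mode: emit whatever continuation best matches the training data, with no step that checks the claim.

```python
from collections import Counter

# Hypothetical toy "training data"; the bad answer happens to be more common.
training_data = [
    "2 + 2 is 4",
    "2 + 2 is 5",  # bad data sneaks in...
    "2 + 2 is 5",  # ...and outnumbers the truth
]

def pattern_match_answer(prompt: str) -> str:
    # Count every continuation of the prompt seen in training.
    continuations = Counter(
        line[len(prompt):].strip()
        for line in training_data
        if line.startswith(prompt)
    )
    # Emit the most common pattern; no arithmetic ever happens.
    return continuations.most_common(1)[0][0]

def person_answer() -> str:
    # A person stops, actually runs the math, then replies.
    return str(2 + 2)

print(pattern_match_answer("2 + 2 is"))  # confidently wrong: 5
print(person_answer())                   # 4
```

The toy confidently answers 5 because that's what matched the pattern most often, which is the gist of the complaint, just with the real thing operating on vastly more data and probabilities instead of raw counts.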
I… I…
I have no words.
How someone could genuinely believe that is beyond me.
I am fully aware, I speak nerd and computer.
The computers speak back. It’s a good time.
I might be going insane?
I’m also ~~ripping off~~ being inspired by another comment.
Poe’s law strikes again?
English, C++; Z80, 6502, and 45GS02 assembly, some SQL, VHDL, a bit of Python and Verilog, BASIC65, bash, CP/M ED, and a few other odds and ends
https://alliancespaceguard.com, pretty much daily. Can’t wait for it to release.