• Tja@programming.dev
    5 months ago

    For one thing, generative networks are incapable of reciting facts reliably

    Neither are humans, for what it’s worth…

    • jj4211@lemmy.world
      5 months ago

      It’s interesting: when you ask an LLM something it doesn’t know, it tends to just spew out words that sound plausible but are wrong.

      So a human who will admit they don’t have an answer is much more useful. Or the human acts like the LLM, spews stupid stuff that sounds right, and gets promoted instead.