• filister@lemmy.world · 6 months ago

    Just ask ChatGPT what it thinks about some non-existent product and it will start hallucinating.

    This is a known issue with LLMs and deep learning in general, as their reasoning is a black box for scientists.

    • db0@lemmy.dbzer0.com · 6 months ago

      It’s not that their reasoning is a black box. It’s that they do not have reasoning! They just guess what the next word in the sentence is likely to be.

        • kureta@lemmy.ml · 6 months ago

          It’s not even a little bit more complicated than that. They are literally trained to predict the next token given a sequence of previous tokens. The way they do that is very complicated and the amount of data they are trained on is huge. That’s why they sometimes have to give correct information in order to sound plausible: providing accurate information is literally a side effect of the thing they are actually trained to do.
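
          To make that concrete, here’s a toy sketch (plain Python, made-up corpus, nothing like a real LLM’s internals) of what “trained to predict the next token” means: count which word tends to follow which in some text, then generate by repeatedly guessing the next word. Real models replace the counting with a huge neural network and vastly more data, but the objective has the same shape, and nothing in it checks whether the output is true.

          ```python
          # Toy bigram "language model": deliberately simplified, only meant to
          # illustrate the next-token-prediction objective, not real LLM internals.
          import random
          from collections import Counter, defaultdict

          corpus = (
              "the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog ."
          ).split()

          # "Training": count how often each token follows each previous token.
          next_counts = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              next_counts[prev][nxt] += 1

          def sample_next(token: str) -> str:
              """Guess the next token in proportion to how often it followed `token`."""
              counts = next_counts[token]
              words = list(counts)
              weights = [counts[w] for w in words]
              return random.choices(words, weights=weights)[0]

          # "Generation": start from a prompt token and keep guessing the next one.
          token = "the"
          output = [token]
          for _ in range(8):
              token = sample_next(token)
              output.append(token)

          print(" ".join(output))
          # Whatever comes out only "makes sense" to the extent the training data did;
          # at no point does the model check whether the sentence is accurate.
          ```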