• VonCesaw@lemmy.world
    1 year ago

    A human considers context, experience, and relation to previous works

    ‘AI’ has the words verbatim in its database and will occasionally spit them out verbatim

    • 📛Maven@lemmy.sdf.org
      1 year ago

      It doesn’t. The original data is nowhere in its dataset. Words are nowhere in its dataset. It stores how often certain tokens (numbers computationally equivalent to language fragments; not even words, but just a few letters or punctuation, often chunks of words) are found together in sentences written by humans, and uses that to generate human-sounding sentences. The sentences it returns are thereby a massaged average of what it predicts a human would say in that situation.

      If you say “It was the best of times,” and it returns “it was the worst of times.”, it’s not because “it was the best of times, it was the worst of times.” is literally in its dataset, it’s because after converting what you said to tokens, its dataset shows that the latter almost always follows the former. From the AI’s perspective, it’s like you said the token string (03)(153)(3181)(359)(939)(3)(10)(108), and it found that the most common response to that by far is (03)(153)(3181)(359)(61013)(12)(10)(108).
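      The counting idea described above can be sketched with a toy bigram model. This is a deliberate oversimplification for illustration only: real LLMs learn statistical weights over subword tokens via training, not literal lookup tables of counts, and the corpus and function names here are invented for the example.

      ```python
      # Toy sketch: a bigram "model" that records how often each token
      # follows each other token in a tiny corpus, then predicts the
      # most frequent continuation. (Real models learn weights over
      # subword tokens; nothing here is actual LLM internals.)
      from collections import Counter, defaultdict

      corpus = "it was the best of times , it was the worst of times"
      tokens = corpus.split()

      # Count how often each token follows each preceding token.
      follows = defaultdict(Counter)
      for prev, nxt in zip(tokens, tokens[1:]):
          follows[prev][nxt] += 1

      def predict_next(token):
          """Return the continuation seen most often after `token`."""
          return follows[token].most_common(1)[0][0]

      print(predict_next("of"))  # -> "times", since "of times" occurs twice
      ```

      Even this trivial version shows the key point: nothing stores the sentence itself, only co-occurrence statistics, and generation is just repeatedly asking "what most often comes next?"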

      • Hello Hotel@lemmy.world
        1 year ago

        Impression and memorization: it memorized the impression (the “sensation”) of having the text “It was the best of times,” in its buffer, and “instinctively” outputs its impression, “it was the worst of times.”, each token it adds being the one it learned was most “correct”, i.e. most rewarded.