• Armok: God of Blood@lemmy.dbzer0.com · +22 / −9 · 1 year ago

    I disagree. I think that there should be zero regulation of the datasets as long as the produced content is noticeably derivative, in the same way that humans can produce derivative works using other tools.

    • Hello Hotel@lemmy.world · +3 / −2 · 1 year ago (edited)

      Good in theory. The problem is that if your bot is given too much exposure to a specific piece of media, and the “creativity” value that adds random noise (and in some setups forces it to improvise) is too low, you get whatever impression the content made on the AI, like an imperfect photocopy (a non-expert’s explanation of “memorization”). Too high and you get random noise.

      • Armok: God of Blood@lemmy.dbzer0.com · +2 · 1 year ago

        if your bot is given too much exposure to a specific piece of media, and the “creativity” value that adds random noise (and in some setups forces it to improvise) is too low, you get whatever impression the content made on the AI, like an imperfect photocopy

        Then it’s a cheap copy, not noticeably derivative, and whoever is hosting the trained bot should probably take it down.

        Too high and you get random noise.

        Then the bot is trash. Legal and non-infringing, but trash.

        There is a happy medium where Stable Diffusion, Midjourney, and many other text-to-image generators currently sit. You can prompt in such a way (or exploit other vulnerabilities) as to create “imperfect photocopies,” but you can also create cheap, infringing works with any number of digital and physical tools.

    • adrian783@lemmy.world · +4 / −5 · 1 year ago

      LLMs are not human, the process used to train LLMs is not human-like, and LLMs don’t have human needs or desires, or rights for that matter.

      Comparing them to humans has been a flawed analogy since day 1.

      • King@lemmy.world · +4 / −1 · 1 year ago

        An LLM has no desires, therefore no derivative works? Let an LLM handle your comments; they’d make more sense.