A rising movement of artists and authors is suing tech companies for training AI on their work without credit or payment

  • fidodo@lemmy.world · 1 year ago

    While I do think it’s technically possible, and the right thing to do, to determine which original works were used in a derivative piece and pay royalties, the reality is that those payments would be a small fraction of what artists make now, since the whole point of generative art is to reproduce derivative works at a fraction of the cost. Unless demand for art increases in proportion to the decreased cost, which it can’t, compensation will decrease, and as more art enters the public domain, compensation will decrease further.
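    A back-of-the-envelope sketch of that math, with invented numbers purely to show the proportions (none of these figures come from anywhere real):

    ```python
    # Toy model of the royalty argument. Every number here is made up
    # for illustration; only the ratios matter.
    human_price = 500.0     # hypothetical price of a commissioned piece
    gen_price = 5.0         # hypothetical price of a comparable generated piece
    royalty_rate = 0.10     # hypothetical royalty share paid to source artists
    demand_multiplier = 10  # generous assumption: cheaper art sells 10x more

    royalty_per_sale = gen_price * royalty_rate            # $0.50
    royalty_income = royalty_per_sale * demand_multiplier  # $5.00

    print(f"${royalty_income:.2f} in royalties vs ${human_price:.2f} per commission")
    # -> $5.00 vs $500.00: a 100x drop unless demand grows ~1000x, which it won't.
    ```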

    This will not save artists, and they need a backup plan. I think the future of art is probably that artists become more like art directors or designers, responsible for having good taste rather than producing the original work, but even that role will see greatly diminished demand as generative AI handles most basic needs.

  • oracle33@lemmy.ml · 1 year ago

    While I recognize that AI art is quite obviously derivative, and given that ML pattern matching requires much more input there’s an argument that it’s even more derivative, I really struggle to grasp how humans learning to be creative aren’t doing exactly the same thing, and what makes that okay (except, of course, that it is okay).

    Maybe it’s just less obvious and auditable?

    • inspxtr@lemmy.world · 1 year ago · edited

      I believe that with humans, the limits of our capacity to know, create, and learn, and the limited contexts in which we apply such knowledge and skills, may actually be better for creativity and relatability; knowing everything is not always optimal, especially for something rooted in subjective experience. Plus, those limitations may also protect creators from certain copyright claims: one idea can come from many independent creators and be implemented in ways that are broadly similar or vastly different. And usually we, as humans, develop a sense of work ethics and attribute the inspirations for our work. There are others who steal ideas without attribution as well, but that’s where the law comes in to settle it.

      As for tech companies using creators’ work for training: generative tech learns at a vastly different scale, slurping up their work without attributing them. If we’re talking about the mechanism of creativity, generative tech seems to have been given a huge advantage already. Plus, artists and creators learn and create their work within some context, sometimes with meaning. Excluding commercial works, I’m not entirely sure the products generative tech creates carry such specificity. Maybe they do, with some interpretation?

      Anyway, I think the larger debate here is about compensation and attribution. How is it fair for big companies with a lot of money to take creators’ work, paying or attributing them minimally if at all, while using these technologies to make even more money?
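      For what it’s worth, the “technically possible” attribution step mentioned earlier in the thread could look something like a similarity search. A minimal sketch, assuming a hypothetical embedding function and an invented threshold; nothing here reflects how any real company actually does it:

      ```python
      # Sketch: flag training works whose embeddings sit suspiciously close
      # to a generated piece, as candidates for credit/royalties.
      # The corpus, the vectors, and the threshold are all hypothetical.
      import numpy as np

      def cosine(a: np.ndarray, b: np.ndarray) -> float:
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      def attribute(generated: np.ndarray,
                    corpus: dict[str, np.ndarray],
                    threshold: float = 0.85) -> list[tuple[str, float]]:
          """Return training works most similar to the generated piece."""
          scores = [(title, cosine(generated, vec)) for title, vec in corpus.items()]
          return sorted((s for s in scores if s[1] >= threshold),
                        key=lambda s: s[1], reverse=True)
      ```

      Whether closeness in some embedding space counts as legally meaningful “use” is, of course, exactly what these lawsuits will have to settle.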

      EDIT: replace AI with gen(erative) tech

  • Lil' Bobby Tables@programming.dev · 1 year ago

    Sorry if this comes off as offensive, but this isn’t news, and I think we all need a reality check here. I’ll also be forward: the artifice of doubt being cast on this makes me pretty angry, and it’s a threat to every one of us.

    So, OK. Speaking as a thirty-year programmer, a neurobiology minor, and a hobby animator, you know what these objections sound like? They sound like somebody who just learned to program, with JavaScript, at the top of that initial peak. You know the one: when you feel like you can do anything and present as such. You’ve never had to worry about stack overflows or memory exceptions or, quite possibly, even fundamental networking utilities, but you got a web page to do something that is, in fact, legitimately cool, and you’re proud of yourself, as you friggin’ should be. However, you don’t know how much you don’t know, and you’re far further from the top than you suspect.

    Until you learn something like C, and really walk away from it with a sore ass and a sense of perspective, you don’t know what you don’t know. This isn’t because we aren’t all rooting for you; it’s an expected rite of passage, the point where you really begin to learn. We’re totally rooting for you; we used to be you! And the same holds for visual art as for programming, two fields that otherwise have nothing to do with each other. As far as I can see this case is open and shut, and yet we’re still mulling over the details.

    Artists don’t copy, they analyze. It’s the difference between reading all of the answers on a Stack Exchange site, considering them, discussing them, putting them in the context of your own life, and applying them in an organized and personal fashion; versus simply copying code verbatim and jamming things in until it arguably works, which is literally what an LLM is designed to do. We’ve all seen this with extremely questionable code output by GPT-3 (and yes, GPT-4; it still happens, just not as frequently).

    The art neural networks are the same: an artist charges for that inspiration because it cost them a lot of physical and emotional toil, a lot of feedback and self-critique. “Prompt engineering” is neither art nor, if we’re honest with ourselves, engineering.

    I maintain that this is basically a slightly obfuscated recap of the Napster trials from twenty years ago, which, ironically, fell back on almost the same “fair use” defense. It’s going to falter just as hard, because there’s a massive difference between showing small, low-resolution images on a search page and using meaningful elements of those images to produce brand new, competing works; works that happen to be flawed, but look the same to someone who browses JPEGs on Google like a channel surfer.

    The last thing I’m going to say is that we need to stop describing LLMs as “AI”. “AI” doesn’t mean anything: it could refer to a neural network, machine learning, A* pathfinding, collective intelligence, or any number of other things, the colloquial definition being technology that “performs a function which was previously only possible for a human”; and come on, once upon a time you could describe a hammer that way. “AI” is not a formal industry term, it’s marketing flak. An LLM is effectively a database of connections browsed with the use of a neural network.
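    To make that “database of connections” framing concrete, here’s a deliberately crude analogy: a bigram table that “writes” by replaying links observed in its training text. A real LLM is vastly more sophisticated than this sketch, but it illustrates the basic point that nothing comes out which didn’t, in pieces, go in:

    ```python
    # Crude analogy only: a bigram table, not a transformer.
    import random
    from collections import defaultdict

    training_text = "artists analyze their influences and artists credit their influences"

    table = defaultdict(list)
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)  # record each observed connection

    def generate(start: str, length: int = 8) -> str:
        out = [start]
        for _ in range(length):
            options = table.get(out[-1])
            if not options:      # dead end: no recorded connection
                break
            out.append(random.choice(options))
        return " ".join(out)

    print(generate("artists"))   # recombines only what the "training" contained
    ```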

    You want to keep using Midjourney and Stable Diffusion? Great! Go for it. You want to use ChatGPT to help you understand some multivariate calculus? Go nuts; I do it all the time, since most mathematicians are terrible at articulating things. However, the companies behind these tools should, without question, have to pay for the art they used, or cease using it if the sale won’t be completed. Any other outcome is absolutely going to lead to an economic collapse.