Is it that, or is it that the laws are selectively applied to the little guys and ignored once you make enough money? It certainly looks that way. Once you’ve achieved a level of “fuck you money,” it doesn’t matter how unscrupulously you got there. I’m not sure letting the big guys get away with it while the little guys still get fucked over is as big of a win as you think it is.
Examples:
The Pirate Bay: Only made enough money to run the site and keep the admins living a middle-class lifestyle.
VERDICT: Bad, wrong, and evil. Must be put in jail.
OpenAI: Claims to be non-profit, then spins off for-profit wing. Makes a mint in a deal with Microsoft.
VERDICT: Only the goodest of good people and we must allow them to continue doing so.
The IP laws are stupid, but letting fucking rich twats get away with it while regular people still get fucked by the same rules is kind of a fucking stupid-ass hill to die on.
But sure, if we allow the giant companies to do it, SOMEHOW the same rules will “trickle down” to regular people. I think I’ve heard that story before… No, they only make exceptions for people who can basically print money. They’ll still fuck you and me six ways to Sunday for the same.
I mean, the guys who ran Jetflicks, a pirate streaming site, are being hit with potentially 48-year sentences. Longer than a lot of way more serious fucking crimes. I’ve literally seen murderers get half that.
But yeah, somehow, the same rules will end up being applied to us? My ass. They’re literally jailing people for it right now. If that wasn’t the case, maybe this argument would have legs.
But AI companies? Totes okay, bro.
Yeah, I’m not a fan of AI, but I’m generally of the view that anything posted on the internet, visible without a login, is fair game for indexing by a search engine, snapshotting for a backup (like the Internet Archive’s Wayback Machine), or running user extensions on (including ad blockers). Is training an AI model all that different?
Yes, it kind of is. A search engine just looks for keywords and links, and that’s all it retains after crawling a site. It’s not producing any derivative works; it’s merely looking up an index of keywords to find matches.
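To make that distinction concrete, here is a toy sketch of the kind of inverted index a crawler retains (illustrative only, with made-up page names; real engines are far more sophisticated, but the key point holds: only keywords mapped to links survive, not the works themselves):

```python
# Toy inverted index: map each keyword to the set of pages containing it.
# Note that the original documents are NOT stored -- only words and URLs.
from collections import defaultdict

def build_index(pages):
    """pages: dict of url -> text. Returns a dict of keyword -> set of urls."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Return the urls containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())  # intersect: all words must match
    return results

# Hypothetical crawled pages, for illustration only.
pages = {
    "example.com/a": "copyright law and derivative works",
    "example.com/b": "search engines index keywords and links",
}
idx = build_index(pages)
print(search(idx, "derivative works"))  # {'example.com/a'}
```

Nothing in `idx` can reproduce the crawled text; an LLM’s weights, by contrast, can encode enough of the training data to regenerate passages of it.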
An LLM can essentially reproduce a work, and the whole point is to generate derivative works. So by its very nature, it runs into copyright issues. Whether a particular generated result violates copyright depends on the license of the works it’s based on and how much of those works it uses. So it’s complicated, but there’s very much a copyright argument there.
An LLM can essentially reproduce a work, and the whole point is to generate derivative works. So by its very nature, it runs into copyright issues.
Derivative works are not copyright infringement. If LLMs are spitting out exact copies, or near-enough-to-exact copies, that’s one thing. But as you said, the whole point is to generate derivative works.
“Copying is theft” has been the argument of corporations for ages, but when they want our data and information to integrate into their business, then suddenly they have the right to it.
If copying is not theft, then we have the right to copy their software and AI models as well, since they are available on the open web. They got themselves into quite a contradiction.
You realize that half of Lemmy is tying themselves in inconsistent logical knots trying to escape the reverse conundrum?
Copying isn’t stealing and never was. Our IP system that artificially restricts information has never made sense in the digital age, and yet now everyone is on here cheering copyright on.
If copying is not theft, then we have the rights to copy their software
No, we don’t. Copying copyrighted material is copyright infringement, which is illegal, but that doesn’t make it theft.
Oversimplifying the issue makes for an uninformed debate.
copying is not theft
Didn’t you hear? We stan draconian IP laws now because AI bad.
Any content you produce is automatically copyrighted.