• 3 Posts
  • 488 Comments
Joined 7 months ago
Cake day: December 18th, 2023

  • The background is that French law requires ISPs to retain the IPs of their customers for some time. That way, an IP address can be associated with a customer.

    If I download music in a Starbucks, can they fine the Starbucks CEO then?

    A CEO is an employee. You generally can’t sue employees for this sort of thing. It may be possible to sue the company as a whole for enabling the copyright infringement, but that’s not what this case is about. Perhaps in the future, operators of WiFi hotspots will be required to use something like Youtube’s Content ID system.

    Anyway, I hope online artists and authors are able to use this to sue AI companies for stealing their copyrighted works.

    They can use this to go after “pirates”. It’s got nothing to do with AI.







  • I’m sure it works fine in the lab. But it really only targets one specific AI model: that one specific Stable Diffusion VAE. I know that there are variants of that VAE around, which may or may not be enough to make the attack moot. The “Glaze” on an image may not survive common transformations, such as rescaling the image. It certainly will not survive intentional efforts to remove it, such as appropriate smoothing.
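    To illustrate why smoothing is so effective against this kind of perturbation, here is a toy sketch (my own illustration, not Glaze’s actual mechanism): a small high-frequency perturbation added to a flat signal, then a plain 3-tap box blur. The perturbation pattern and magnitudes are made-up stand-ins.

```python
# Toy illustration: a small high-frequency perturbation added to a
# smooth signal is largely wiped out by a simple 3-tap box blur.
base = [100.0] * 16                       # flat "image" row
perturbed = [v + (2.0 if i % 2 else -2.0) for i, v in enumerate(base)]

def box_blur(row):
    # 3-tap moving average, edges clamped to the available samples
    out = []
    for i in range(len(row)):
        window = row[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

smoothed = box_blur(perturbed)
err_before = max(abs(v - 100.0) for v in perturbed)   # perturbation amplitude: 2.0
err_after = max(abs(v - 100.0) for v in smoothed)     # much smaller after blurring
print(err_before, err_after)
```

    Anything that averages neighbouring pixels (rescaling, JPEG re-encoding, blurring) has this effect to some degree, which is why such perturbations tend not to survive ordinary image handling.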

    In my opinion, there is no point in bothering in the first place. There are literally billions of images on the net. One locks up gems because they are rare. This is like locking up pebbles on the beach. It doesn’t matter if the lock is bad.

    Saw a post on Bluesky from someone in tech saying that eventually, if it’s human-viewable it’ll also be computer-viewable, and there’s simply no working around that, wonder if you agree on that or not.

    Sort of. The VAE, the compression, means that the image generation takes less compute, i.e. cheaper hardware and less energy. You can have an image generator that works directly on the pixels humans see. In fact, that’s simpler and existed earlier.
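    Some rough arithmetic on the size of that saving, assuming Stable Diffusion 1.x shapes (a 512×512 RGB image compressed by the VAE into a 64×64×4 latent tensor):

```python
# Rough arithmetic (assuming Stable Diffusion 1.x shapes): how much
# smaller the VAE's latent space is than raw pixel space.
pixel_elems = 512 * 512 * 3     # RGB image the human sees
latent_elems = 64 * 64 * 4      # latent tensor the diffusion model works on
print(pixel_elems // latent_elems)  # → 48
```

    So the diffusion model pushes around roughly 48× fewer values per step than a pixel-space model at the same resolution would.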

    Going by Moore’s law, it would take many years, even decades, before we can do without that efficiency gain. But I think, maybe, this becomes moot once special accelerator chips for neural nets are designed.

    What makes it obsolete is the proliferation of open models. E.g., today Stable Diffusion 3 becomes available for download. This attack targets one specific model and may work on variants of it. But as more and more rather different models become available, the whole thing becomes increasingly pointless. Maybe you could target more than one, but it would be more and more effort for less and less effect.