• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: July 3rd, 2023


  • Okay that’s fine, but when websites are effectively writing

    if (!navigator.userAgent.includes("Chrome")) {
        throw new Error("Firefox is not supported, please use Chrome");
    }
    

    It doesn’t really matter how good compatibility is. I’ve had websites go from nothing but a “Firefox is not supported, please use Chrome” splash screen to working just fine with Firefox by simply spoofing the user agent to Chrome. Maybe some feature was broken, but I was able to do what I needed. More often than not they just aren’t testing it and don’t want to support other browsers.
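
    To be concrete about why the spoof works: the check is purely on the UA string, so lying in it is enough (in Firefox that’s just the general.useragent.override preference in about:config). Rough sketch below; the UA strings and version numbers are arbitrary examples, not anything you need to match exactly:

    // Illustrative UA strings only; platform and versions are made-up examples.
    const realFirefoxUA = "Mozilla/5.0 (X11; Linux x86_64; rv:126.0) Gecko/20100101 Firefox/126.0";
    const spoofedUA = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36";

    console.log(realFirefoxUA.includes("Chrome"));  // false → "not supported" splash screen
    console.log(spoofedUA.includes("Chrome"));      // true  → the very same site suddenly works fine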

    The more insidious side of this is that websites will require and attempt to enforce Chrome as ad blocking on them becomes increasingly impossible, because that aligns with their interests. It’s so important for the future of the web that we resist this change, but I think it’s too late.

    The world wide web is quickly turning into the dark alley of the internet that nobody is willing to walk down.


  • Yeah this is a hard one to navigate and it’s the only thing I’ve ever found that challenges my philosophy on the freedom of information.

    The archive itself isn’t causing the abuse, but CSAM is a record of abuse. We restrict its distribution not because distribution or possession is inherently abusive, but because its creation was, and we don’t want to support an incentive structure for the creation of more abuse.

    i.e. we don’t want more pedos abusing more kids with the intention of archival/distribution. So the archive itself isn’t the abuse, but the incentive to archive could be.

    There are also a lot of questions that come up about the ethics of CSAM in general that I think we aren’t ready to think about. It’s a hard topic all around, and nobody wants to seriously address it beyond virtue signalling about how bad it is.

    I could potentially see a scenario where archival could be beneficial to society, similar to the NCMEC hash libraries Apple proposed using to scan iCloud for CSAM. If we throw genAI at this stuff to learn about it, we may be able to identify locations, abusers and victims to track them down and save people. But it would necessitate the existence of the data to train on.

    I could also see potential for using CSAM itself for psychotherapy. Imagine a sci-fi future where pedos are effectively cured by using AI trained on CSAM to expose them to increasingly mature imagery, allowing their attraction to mature with it. We won’t really know if something like that is possible if we delete everything. It seems awfully short-sighted to me to delete data, no matter how perverse, because it could have legitimate positive applications that we haven’t conceived of yet. So to that end, I do hope some three-letter agencies maintain their restricted archives of data for future applications that could benefit humanity.

    All said, I absolutely agree that the potential for creating incentives for abusers to abuse is a major issue with immutable archival, and it’s definitely something we need to figure out before such an archive actually exists. So thank you for the thought experiment.






  • It’s data.

    It’s never “owning” in the traditional sense, because data is not physical.

    When people say they own something, there’s an implication that it’s theirs until they decide to part with it. That is true for games bought without DRM. DRM-free is the closest you’ll ever get to ‘owning’ data: you possess it on your own local device and it can’t be taken away.

    You can lose the ability to download the game, sure. But that is an additional service, not the game itself. You have that data until you delete it. Same with GOG Galaxy; that’s an extra service.

    You’re arguing two or three different things: ownership as a legal right, ownership as in possession, and a weird third thing where you seem to be confusing meta-services with ownership of the thing itself.










    I use so many of Steam’s features that it’s unfathomable to use any other launcher or even pirate anything, because Steam is so streamlined. Cloud saves, automatic local file transfers instead of redundant downloads, family sharing to my friend’s PC so half the time when I visit she’ll have already downloaded and played my new games. When I get there they’re just ready to go. Remote desktop to make any tweaks on my PC, or casual gaming over streaming. Big Picture mode so I can lay back with a controller and chill, no futzing with m+kb UI. Steam Input means I can easily drop in and out with any controllers.

    I just got a Steam Deck, and while I could install another app store on it, I’ve stuck entirely with Steam just for the UX. I don’t want to fuck with extra launchers and touchscreen bs.

    I just played a co-op Windows game on a Linux-based portable PC on a 4K TV with a $24 USB hub for video out, using Xbox and PS5 controllers over Bluetooth. This was completely seamless and controller-navigated. Steam is insanely good.



    We can certainly argue over what they’re designed to do, and I definitely agree that’s the goal of them. The reality, though, is that on some level it is impossible to separate assertions from the words that describe them. Language itself is designed to communicate ideas; you can’t really produce language without also communicating ideas, otherwise every sentence from an LLM would just look like

    “Has Anyone Really Been Far Even as Decided to Use Even Go Want to do Look More Like”

    They will readily cite information that was fed to them. Sometimes it is on point, sometimes not. That starts to become a bit of an ethical discussion about whether it is okay for them to paraphrase information they were fed without citing the source.

    In a perfect world we should be able to expand a whole learning tree to trace back how the model pieced together each word and point of data it is citing, kind of like an advanced Wikipedia article. Then you could take the typical synopsis the model provides and dig into it to judge for yourself whether it’s accurate. From a research standpoint I view info you collect from a language model as a step down from a secondary source, and we should be able to easily see how it arrives at that info.
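
    Something like this is what I have in mind; just a rough sketch of the shape of that learning tree, with made-up names, not any real model’s API:

    // Hypothetical sketch of a provenance/"learning" tree for a generated answer.
    // Every name here is invented for illustration; no model exposes this today.
    interface SourceRef {
      title: string;     // the document the model drew from
      url?: string;      // link back to the original, when one exists
      excerpt: string;   // the passage being paraphrased
    }

    interface ClaimNode {
      text: string;           // a sentence or phrase from the model's synopsis
      sources: SourceRef[];   // what that claim was pieced together from
      children: ClaimNode[];  // finer-grained claims you can keep expanding
    }

    // Drill down from the synopsis, Wikipedia-style, to judge accuracy yourself.
    function printTree(node: ClaimNode, depth = 0): void {
      const cites = node.sources.map(s => s.title).join(", ");
      console.log("  ".repeat(depth) + node.text + (cites ? `  [${cites}]` : ""));
      for (const child of node.children) printTree(child, depth + 1);
    }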