• 0 Posts
  • 92 Comments
Joined 1 year ago
Cake day: July 25th, 2023

  • Exactly. USB is designed so that you can have multiple devices attached to one port. Seven USB ports on the PC is plenty.

    And in fact, they probably already have a hub. I can’t remember the last monitor I bought that didn’t have a couple of USB ports on it. Put that thing to use. A webcam, USB headset/mic, keyboard, and mouse can all run perfectly well off a monitor hub, as can most other accessories. Save the direct ports on the mobo for things that actually need the bandwidth, like storage devices.
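    If you’re curious what’s actually hanging off which hub, here’s a minimal sketch using the third-party pyusb library (this assumes you have pyusb plus a libusb backend installed; whether port_numbers is populated depends on your platform):

    ```python
    # Minimal sketch: list USB devices and the hub path they sit behind.
    # Assumes `pip install pyusb` and a libusb backend on the system.
    import usb.core

    for dev in usb.core.find(find_all=True):
        # port_numbers is the chain of hub ports from the root port down to
        # the device (it may be None on some backends); a monitor hub simply
        # adds one more hop to this tuple.
        hops = dev.port_numbers or ()
        path = ".".join(str(p) for p in hops) or "root"
        print(f"bus {dev.bus}, port path {path}: "
              f"vid={dev.idVendor:04x} pid={dev.idProduct:04x}")
    ```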



  • These are some very pretty words that express ideas without much self-reflection on why the ideas might be bad.

    I mean, I suppose you did say it yourself that you can’t trust the US government… but why would you trust ANY government? You know why I trust Google more than any government? I understand Google’s motivations ($$$). Put something into the hands of government and suddenly that thing is burdened by the desires of every politician and their special interest financiers.

    “Place it in the hands of something like the UN” would mean some international body, I assume. Composed of and led by whom, exactly? And who would fund the thing? You suggest nationalization, so… taxpayers? Sure, here’s your $99/year Degooglebase access fee tax, I guess? And beyond just making sure there’s enough money to keep the lights on, we need to make sure there’s enough money to pay creators. After all, YouTube isn’t just a library. It’s an economy larger than that of some countries, and there would be consequences to destabilizing that economy. People aren’t just posting content for the love of the shared experience.

    Please don’t take what I’m saying here to be a defense of Google. Google is a shitty company for so many reasons. But advocating for the nationalization of YouTube, at least in the manner it was presented, is just a horrifically bad idea.

    But all is not lost. First: find ways to support the creators you enjoy that don’t go through Google. Make it possible for them to continue when YouTube stops being lucrative enough.

    Second: find, use, and advocate for the use of alternative services. There is no single site that is going to be able to replace YouTube. It simply isn’t going to happen unless PornHub wants to step up and create their own SFW YouTube killer. They have the infrastructure and capacity to host and share absolutely massive amounts of video, and they have the business capabilities to accept income and pass it on to creators at scale. But that’s an entirely different discussion.

    Best to look at things differently. Like the Fediverse and the internet itself, the platform might be better off distributed.



  • Just getting back around to this.

    My main reasoning is simply that authors and artists should be fairly credited and compensated for their work. If I create something and share it on the internet, I don’t necessarily want a company to make money on that thing, especially if they’re making money to my exclusion.

    So while I believe that IP as we know it today is probably not the best way to handle things, I still think creators should have some say over how their works are used and should receive some reasonable share when their works are used for profit. Without creators, those works wouldn’t exist in the first place.

    Are there other jobs where it would be okay to take a person’s services without paying them? What would motivate people to continue providing those services?


  • Prompting for a source wouldn’t satisfy me until I could trust that the AI wasn’t hallucinating. After all, if GPT can make up facts about things like legal precedent or well documented events, why would I trust that its citations are legitimate?

    And if the suggestion is that the person asking for the information double check the cited sources, maybe that’s reasonable to request, but it somewhat defeats the original purpose.

    Bing might be doing things differently though, so you might be right in your assessment on that front. I haven’t played with their AI yet.



  • Your argument poses an interesting thought. Do machines have a right to fair use?

    Humans can consume for the sake of enjoyment. Humans can consume without a specific purpose of compiling and delivering that information. Humans can do all this without having a specific goal of monetary gain. Software created by a for-profit privately held company is inherently created to consume data with the explicit purpose of generating monetary value. If that is the specific intent and design then all contributors should be compensated.

    Then again, we can look no further than Google (the search engine, not the company) for an example that’s closely related to the current situation. Google can host excerpts of data from billions of websites and serve that data up on request without compensating those site owners in any way. I would argue that Google is different, though, because it literally cites every single source. A search result isn’t useful if we don’t know what site the result came from.

    And my final thought: are the works that AI generates truly transformative? I can see arguments that go either way.


  • Let me ask you this: when have you ever seen ChatGPT cite its sources and give appropriate credit to the original author?

    If I were to just read the NYT and make money by summarizing its articles and posting those summaries on my own website, without adding anything of my own like commentary and without giving credit to the author, that would rightfully be considered plagiarism.

    This is a really interesting conundrum though. I would argue that AI isn’t capable of original thought the way that humans are and therefore AI creators must provide due compensation to the authors and artists whose data they used.

    AI is only giving back some amalgamation of words and concepts that it has been trained on. You might say that humans do the same, but that isn’t exactly true. The human brain is a funny thing. It can forget, it can misremember. It can manipulate. It can exaggerate. It can plan. It can have irrational or emotional responses. AI can’t really do those things on its own. It’s just mimicking human behavior at best.

    Most importantly to me though, AI is not capable of spontaneous thought. It is only capable of providing information that it has been trained on and only when prompted.


  • It’s honestly difficult for me to say because there are so many different ways to train AI. It really depends more on what the trainers configure to be a data point. The volume of files versus the size of a single file isn’t as important as what the AI considers a data point and how those data points are weighted.

    Just as a simple example, a data point may be considered a row in a spreadsheet, without regard for how that data was split up across files. So ten files with 5 rows each might have the same weight as one file with 50 rows. But there’s also a penalty concept in some models: the trainer can set things up so that data coming entirely from one file gets penalized. Or the opposite could be true, if data coming from the same file is deemed to be more important in some way.

    In terms of how AIs make their decisions, that can also vary. But generally speaking, if 1000 pieces of data are used that are all similar in some way and one of them is somewhat different from the others, it is less likely that that one-off data will be used. It’s much more likely to have an effect if 100 of the 1000 pieces of data carry that same information. There’s always the possibility of using that 1-in-1000 data point; it’s just less likely to have a noticeable effect (there’s a rough sketch of this below).

    AIs build confidence in responses based on how much a concept is reinforced, so you’d have to know something about the training algorithm to be able to intentionally impact the results.
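    To put some toy numbers on the weighting and reinforcement ideas above, here’s a minimal, hypothetical sketch in plain Python. The file layout, the per-file penalty, and the sampling loop are all invented for illustration; real training pipelines do this very differently:

    ```python
    import random
    from collections import Counter

    # Hypothetical corpus: each "file" contributes some rows (data points).
    files = {
        "big_file.csv": ["common concept"] * 50,        # one file, 50 rows
        **{f"small_{i}.csv": ["common concept"] * 5     # ten files, 5 rows each
           for i in range(10)},
        "odd_file.csv": ["one-off concept"],            # the lone outlier row
    }

    # Treat every row as one data point. With the per-file penalty enabled,
    # each row is weighted by 1/len(its file), so a single huge file cannot
    # dominate the mix just by having more rows.
    def row_weights(files, per_file_penalty=True):
        rows, weights = [], []
        for name, file_rows in files.items():
            for row in file_rows:
                rows.append(row)
                weights.append(1 / len(file_rows) if per_file_penalty else 1.0)
        return rows, weights

    rows, weights = row_weights(files)

    # Sample "which data point influences the answer" many times.
    draws = Counter(random.choices(rows, weights=weights, k=10_000))
    print(draws)
    # The one-off concept still gets picked occasionally, but the heavily
    # reinforced concept dominates: repetition, not mere presence, is what
    # ends up shaping the output.
    ```

    Flip per_file_penalty to False and the 50-row file suddenly carries ten times the weight of each small file, which is exactly the kind of knob the trainer gets to turn.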