Because they would have brownouts overnight and when the weather was bad.
Don’t give them any ideas
I always thought the Chinese Room argument was kinda silly. It’s predicated on the idea that humans have some unique capacity to understand the world that can’t be replicated by a syntactic system, but there is no attempt made to actually define this capacity.
The whole argument depends on our intuition that we think and know things in a way inanimate objects don’t. In other words, it’s circular: the conclusion that computers can’t think is drawn straight from the premise that computers can’t think.
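To make “syntactic system” concrete, here’s a toy sketch of Searle’s room as code (the rulebook entries are invented for illustration): it maps symbol shapes to symbol shapes with zero understanding of either, which is exactly the setup the intuition pump relies on.

```rust
use std::collections::HashMap;

// A toy "Chinese Room": a purely syntactic system that maps input
// symbols to output symbols without understanding either side.
fn main() {
    // Hypothetical rulebook entries, purely for illustration.
    let rulebook: HashMap<&str, &str> = HashMap::from([
        ("你好", "你好！"),         // "hello" -> "hello!"
        ("你会思考吗", "当然会。"), // "can you think?" -> "of course."
    ]);

    let input = "你会思考吗";
    // The room follows rules on symbol shapes alone; whether that
    // counts as "understanding" is exactly what the argument disputes.
    match rulebook.get(input) {
        Some(reply) => println!("{reply}"),
        None => println!("……"),
    }
}
```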
What’s the thermal impact of a RAM module? Don’t they use like 2 or 3 watts even in a desktop? Can’t be much…
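A rough sanity check on that figure, with all numbers assumed (roughly 3 W for a DDR4 DIMM under load, a typical 125 W desktop CPU power limit):

```rust
// Back-of-envelope: RAM heat output vs. the CPU's, assumed figures only.
fn main() {
    let watts_per_dimm = 3.0; // assumed rough worst case per DDR4 DIMM
    let dimms = 4.0;          // a fully populated desktop board
    let cpu_watts = 125.0;    // assumed typical desktop CPU power limit

    let ram_total = watts_per_dimm * dimms;
    println!("RAM: {ram_total} W vs CPU: {cpu_watts} W");
    println!("RAM is ~{:.0}% of the CPU draw alone", 100.0 * ram_total / cpu_watts);
}
```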
Because RAM is incredibly cheap and developer hours are incredibly expensive. I think it’s a bit silly too, but there’s just no financial incentive for companies to care about memory usage when they know most consumer devices have tons of extra headroom.
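A back-of-envelope version of that trade-off, with made-up but plausible figures:

```rust
// How much RAM does one hour of developer time buy? Figures assumed.
fn main() {
    let dev_hour_cost = 100.0; // $/hour of developer time, assumed
    let ram_cost_per_gb = 3.0; // $/GB of DDR4/DDR5, assumed ballpark

    println!("One dev-hour buys roughly {:.0} GB of RAM", dev_hour_cost / ram_cost_per_gb);
}
```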
*proceeds to wrap everything in `unsafe {}`*
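For anyone who hasn’t written Rust, a minimal sketch of why this is a joke and not a strategy: `unsafe` doesn’t switch off the compiler’s checks wholesale, it just unlocks a few extra operations (raw pointer dereferences, FFI calls, etc.) and makes their correctness your problem.

```rust
fn main() {
    let x = 42u8;
    let p = &x as *const u8;
    // This compiles fine inside `unsafe`, and it is also an
    // out-of-bounds read: undefined behavior. The keyword means
    // "trust me", not "this is now correct".
    let oob = unsafe { *p.add(1) };
    println!("{oob}");
}
```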
Sounds like a good way to get the feds interested in your otherwise not very notable property crime.
No, motion sickness.
I read the second half of this in Heath Ledger’s Joker voice
Well, that’s when I’m on social media…
Wait until you find out where Indiana University is
For a horrifying take on this, check out this short story by qntm
I don’t understand why I would want a bunch of USB-C ports. On a phone, where there obviously isn’t space for a full-sized port, sure, but I find that fiddling with the one USB-C port on the back of my desktop is a pain in the ass, and the port really struggles to keep a good connection when attached to a stiff or heavy cable.
You’re never going to be able to formally prove anything as nebulous as “harm”, full stop, so this isn’t a very convincing argument imo.
I’m skeptical that an LLM could answer questions as effectively with just the documentation. A big part of the value in Stack Overflow and similar sites is that the answers come from people who have experience with a given technology and some understanding of its pain points. Oftentimes you can ask the wrong question and still get a useful answer, because the context is enough for others to figure out what you might be confused by.
I’m not sure an LLM could do the same just given the docs, but it would be interesting to see how close it could get.
So… LimeWire?
Idk if that would be a good business decision. They would still want starting a channel to be free and easy, so once your channel reached a certain popularity, Google would make the deal progressively worse. That would create a big incentive for competition if all your biggest content creators were suddenly paying over cost to subsidize smaller channels.
Not that this would be a bad thing, but I don’t see why Google would ever want to risk it.
Seems like a lot of stuff like that though. At this point I only use windows to play games and I want to interact with the OS as little as possible, so I don’t understand why I would want an updated UI with more ads and Microsoft integrations when it does nothing to improve what I actually use it for.
The mentioned but unsupported link to “general intelligence” reeks of bullshit to me. I don’t doubt a modified LLM (maybe an unmodified one as well) can beat lossless compression algorithms, but I doubt that’s very useful or impressive when you account for the model size and speed.
If you allow the model to be really huge in comparison to the input data, it’s hard to prove you haven’t just memorized the training set.
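A sketch of the accounting that makes the point, with all numbers assumed for illustration. In a Hutter-Prize-style comparison the decompressor counts against you, and for an LLM-based coder the decompressor is the model:

```rust
// Fair lossless-compression accounting: compressed output PLUS the
// size of whatever is needed to decompress it. All figures assumed.
fn main() {
    let input_gb = 1.0;   // corpus being compressed
    let llm_ratio = 10.0; // assumed: LLM-based coder compresses 10x
    let gzip_ratio = 3.0; // assumed: classic compressor manages 3x
    let model_gb = 14.0;  // assumed: ~7B params at 2 bytes each

    let llm_total = input_gb / llm_ratio + model_gb;
    let gzip_total = input_gb / gzip_ratio; // gzip binary is negligible

    println!("LLM coder: {llm_total:.2} GB total");
    println!("gzip:      {gzip_total:.2} GB total");
}
```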
The problem isn’t that the energy is too cheap, it’s that there’s too much of it, which is why it’s so cheap. An electrical grid can only support so much power, and there is no cost-effective way to store enough energy to run the grid for any appreciable amount of time, so it all must be used or else the system becomes unstable.
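Some assumed round numbers on why “just store it” doesn’t pencil out:

```rust
// Cost of storing a single hour of full-grid output. Figures assumed:
// ~450 GW average US grid load, ~$300/kWh for grid-scale batteries.
fn main() {
    let grid_gw = 450.0;      // assumed average grid load, GW
    let hours = 1.0;          // store one hour of full-grid output
    let cost_per_kwh = 300.0; // assumed $/kWh at grid scale

    let energy_kwh = grid_gw * 1e6 * hours; // 1 GW = 1e6 kW
    let cost_billions = energy_kwh * cost_per_kwh / 1e9;
    println!("One hour of grid storage ≈ ${cost_billions:.0} billion");
}
```

And that buys only an hour; overnight or a bad-weather week is an order of magnitude more, which is why the grid is run as a real-time balancing act instead.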