Fair. PowerToys is really extensive. I quite like Pop's (or GNOME's? Not sure) tiling window manager though.
Pop!_OS's COSMIC menu is like that, I think (you can search files, the web, even stuff like turning the volume up and down)? But I've never tried to run it outside of Pop!_OS.
For the screenshot you might want to use a terminal that doesn't have bloom, a CRT filter, and a background image; I genuinely can't see the TUI.
Lol I didn’t get the reference before
(There was a post about Switzerland considering legalizing cocaine because they have so much and it's so pure and common, apparently.)
Uh. Buddy. They absolutely are known for building a shitload of trains. There's the Gotthard Base Tunnel, which is the longest railway tunnel in the world, and I think they also have the steepest rack railway in the world?
You've never heard of Swiss trains always being on time?
This is a really solid explanation of how studies finding human behavior in LLMs don’t mean much; humans project meaning.
Neural networks are named that way because they're based on a model of neurons from the 50s, which was then adapted further to work better on computers (so it doesn't resemble the original model much anymore anyway). A more accurate term is Multi-Layer Perceptron.
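For anyone curious, here's roughly what that looks like. The 50s model (the perceptron) is just a weighted sum pushed through a hard threshold; modern layers keep the weighted sum but swap the threshold for a smooth activation. A toy sketch in Python/NumPy, not any particular library's API:

```python
import numpy as np

# The 1950s artificial "neuron" (a perceptron): weighted sum + hard threshold.
def perceptron(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

# A modern "neural network" layer keeps the weighted sum but uses a smooth-ish
# activation (here ReLU) so gradients can flow during training.
def mlp_layer(x, W, b):
    return np.maximum(0.0, W @ x + b)
```

E.g. `perceptron(x, w=[1, 1], b=-1.5)` implements a logical AND of two binary inputs, which is about the level of the original model.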
We now know this model is… effectively completely wrong.
Additionally, the main part (or glue, really) of LLMs isn't even an MLP, but a "self-attention" layer. You can't say LLMs work like a brain, because they don't. The rest is debatable, but it's important to remember that there are billions of dollars of value in selling the dream of conscious AI.
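To show how different that is: self-attention mixes information across all tokens at once, weighted by query-key similarity, instead of transforming one input through fixed layers. A minimal single-head sketch (NumPy, no masking or multiple heads, so a simplification of what real models do):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a sequence X of shape (tokens, dim).
    Every output token is a weighted mix of ALL tokens' values."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # softmax over each row (stabilized by subtracting the row max)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Nothing in a 50s neuron model looks like this; the attention weights are recomputed per input, which is exactly the part an MLP can't do.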
Nah. Programming is really hard to automate, and machine learning even more so. The actual programming for it is pretty straightforward, but to make anything useful you need to get training data, clean it, and design a model architecture, which is far too open-ended a task for an LLM.
Yeah, but the bridge is correctly over the river and the buildings aren't really merged. Tough one, though.
The second one got me tho
Sure, it’s not proof, but it gives a good starting point. Non-overfitted images would still have this effect (to a lesser extent), and this would never happen to a human. And it’s not like the prompts were the image labels, the model just decided to use the stock image as a template (obvious in the case with the painting).
Personally, I have no issue with models made from stuff obtained with explicit consent. Otherwise you’re just exploiting labor without consent.
(Also if you’re just making random images for yourself, w/e)
((Also also, text models are a separate debate and imo much worse considering they’re literally misinformation generators))
Note: if anybody wants to reply with “actually AI models learn like people so it’s fine”, please don’t. No they don’t. Bugger off. https://arxiv.org/pdf/2212.03860.pdf here have a source.
… which is maybe why things that are essentially critical to a developed country’s lifestyle probably shouldn’t simply be companies. If we go off of “it’s not profitable”, public transport wouldn’t be any good, postal services would suck, etc.
The internet should be a public service like mail.
Also, in the US they paid the ISPs to hook everyone up to fiber, and then they just… didn’t.
Uh, no, people definitely did. Mostly the people who actually knew how this shit worked. But even laypeople complained when it was just DALL-E and Midjourney.
If you think something being legal automatically makes it not wrong, I don't trust you on… well, much of anything, but especially privacy.
Mhm, but with the way LLMs work, it’s not possible to actually remove bias since it’s baked into the training data. Any adjustment towards “neutral” would be biased by what the adjuster considers neutral.
What did you get, and for how much? To me it seems the Framework (at least the 16) is only a bit more expensive (100–200 out of 1600) than laptops with similar specs.
One possibility is to allow users to join a controlled allowlist (or a blocklist, though that runs more into that problem), where some actor acts as a trust authority (which the user picks). This keeps the P2P model while still allowing for large networks since every individual doesn’t have to be a “server admin”. A user could also pick several trust authorities.
Essentially, the network would act as a framework for “centralized” groups, while identity remains completely its own.
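A toy sketch of that design, just to make it concrete (everything here, `is_trusted`, the authority IDs, the allowlist shape, is hypothetical, not any real federation protocol):

```python
# Each trust authority publishes an allowlist of peer IDs it vouches for.
# A user picks one or more authorities; a peer is accepted if ANY of the
# chosen authorities vouches for it. Identity stays with the peer itself.
def is_trusted(peer_id, chosen_authorities, allowlists):
    return any(peer_id in allowlists.get(auth, set())
               for auth in chosen_authorities)
```

So a user trusting both `authA` and `authB` sees the union of their allowlists, without either authority owning the user's identity or the network itself.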
Out of curiosity, I looked up the numbers. This is correct: they make 9.2 billion per quarter from ads and 10.7 billion from subscriptions. I can't find expenses per segment, but in 2023 their total "Cost of revenues" was 37 billion. I doubt everything other than YouTube costs less than 17 billion, so they're definitely making a profit.
Source: https://abc.xyz/assets/95/eb/9cef90184e09bac553796896c633/2023q4-alphabet-earnings-release.pdf