He is still using 100% of the brain he has though.
Sorry, I thought we might be underestimating the "slower" factor, but I couldn't quickly find numbers to prove my point. I might be wrong after all. I wish you a good night. 😊
You don't need to do it often, but initial training requires huge resources, and someone has to do it if you want to create new models from scratch. And for that you need your compute packed as close together as possible.
Network latency will make distributed training a very time-consuming task.
Meet my friend: .unwrap()
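For readers unfamiliar with the joke, a minimal Rust sketch (illustrative only, not from the thread) of what `.unwrap()` does and why leaning on it is risky:

```rust
// .unwrap() extracts the success value from a Result or Option,
// and panics (crashes the thread) if there isn't one.
fn main() {
    // Fine here: "42" always parses, so this never panics.
    let parsed: i32 = "42".parse().unwrap();
    println!("{parsed}");

    // "not a number" would make .unwrap() panic at runtime;
    // unwrap_or provides a fallback value instead of crashing.
    let risky = "not a number".parse::<i32>().unwrap_or(0);
    println!("{risky}");
}
```

Handy for prototypes; in production code, pattern matching or the `?` operator is usually the better friend.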
TheLounge is a neat web client
I now imagine the machine doing nothing for a long time and then spitting out: "This is taking too long, I am going home!"
So you are exempt from the laws of physics?
So you believe that if you rewound time to a specific choice you made, you would be able to make a different choice, even though your brain and your surroundings were in the exact same state as before? Or do you believe your choices originate from somewhere other than your brain?
No problem; quality will continue to degrade until you are happy to switch.
Also, the placebo effect is harder to control for in this instance, as people are often able to tell whether they were actually administered psilocybin or not.
Fuck them. I hope everyone for whom it is feasible switches to something like Godot.
No vendor lock-in with his solution though.
LLMs don't state facts; they are just fancy calculators for language. If you use them as if they were a database of facts, you will make a fool of yourself, like stating that the guy who destroyed German science was an effective leader.
New security guideline just dropped: frequently rotate your keyboard layout.
Feeling old today?
Like the planet Solaria: https://fandom.adminforge.de/asimov/wiki/Solaria