I find it helpful sometimes to write down the negative shit then wad it up and throw it away or burn the paper.
That’s my biggest problem with the whole “burn it all down” mindset so many people espouse on this platform. If we burn down society, we aren’t gonna magically have a utopia. We’d still be, at best, the same as folks during the so-called dark ages. More likely, we’d still basically be tribal hunter-gatherers, and the only reason we have any semblance of modern life is because we put so much work into maintaining and improving it generation by generation.
Less good than helping the shooter before they crossed the Rubicon…I think that’s the point.
LLMs are not general AI. They are not intelligent. They aren’t sentient. They don’t even really understand what they’re spitting out. They can’t even reliably do the 1 thing computers are typically very good at (computational math) because they are just putting sequences of nonsense (to them) characters together in the most likely order based on their training data.
When LLMs feel sentient or intelligent, that’s your brain playing a trick on you. We’re hard-wired to look for patterns and group things together based on those patterns. LLMs are human-speech prediction engines, so it’s tempting and natural to group them with the thing they’re emulating.
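To make the “prediction engine” point concrete, here’s a toy sketch (my own illustration, not any real LLM’s code): a bigram model that picks the statistically most likely next word based purely on counts from its training text. Real LLMs are enormously more sophisticated, but the core move is the same, predict the next token from statistics, with no grasp of meaning.

```python
# Toy next-word predictor, assuming a tiny hand-picked training text.
# It "writes" by emitting the most frequent follower of the current word --
# no understanding involved, just counting.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the most frequent follower of "the"
```

Ask it to do arithmetic and it’ll fail the same way an LLM can: the digits are just more tokens to pattern-match, not quantities to compute with.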
They fired 12 employees of a workforce numbering over 216,000. Looks like they fired 1000x more employees (literally…12000) last year just because “that’s business.” What a nothingburger.
Judaism, Christianity, and Islam all have the same fake man in the sky. They’re fighting over who his message boys were…but that really isn’t very relevant in the current situation.
Sometimes, you just wanna punch yourself in the dick. It won’t fix your problems, but it’ll make em feel less important for a few minutes.
My understanding is most of us do have these patterns, we just can’t see them.
A faulty sensor…a faulty sense o’ responsibility, amirite?
Sometimes, it’s the crime *and* the cover-up.
AI isn’t giving the right misinformation
Same here. It’s good for writing your basic unit tests, and the explain feature is useful for getting your head wrapped around complex syntax, especially as bad as searching for useful documentation has gotten on Google and ddg.
Everyone I know who’s interested in raw milk probably has a few crates of ivermectin left over from the pandemic…should be plenty to keep them safe from the flu, too. /s
It’s all a joke
Have you considered the possibility that you’re a computer?
What would be better is polluting the software with invalid but still plausible constraints, so the chips would seem OK and might work for days or weeks but would fail in the field… especially if these chips are used in weapon systems or critical infrastructure.
It’s a pretty big presumption that Elon Musk is providing transparent and accurate information to consumers about a technology he’s hoping to sell. While I’d agree with the premise normally, he’s kind of a known bad actor at this point. I’m a pretty firm believer in informed consent for this kinda stuff, but I just don’t see much reason to trust that Musk is willing to fully inform someone of the limitations, constraints, or risks involved in anything he has a personal stake in. If you aren’t informed, you can’t provide consent.
I have, completely by accident and with no significant effort on my part, gone 40 years without mass murdering. It almost just happens on its own.
Yeah, Trump certainly helped stabilize the area by moving the Embassy. And that mid-east peace plan his SIL got paid $2 billion for was worth every cent.
Your insurance company isn’t just fucking you with premiums, they also expect the guys that come and fix things up after a disaster to lose money doing it: zero overhead, zero profit.