It probably doesn’t matter from a popular perception standpoint. The talking point that AI burns massive amounts of coal for each deepfake generated is now deeply ingrained; it’ll be brought up regularly for years after it’s no longer true.
We are talking specifically about OpenAI, though.
Well, my point is that it’s already largely irrelevant what they do. Many of their talented engineers have moved on to other companies, some new startups and some already-established ones. The interesting new models and products are not being produced by OpenAI so much any more.
I wouldn’t be surprised if “safety alignment” is one of the reasons, too. There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that’s likely to lock away the things they build if they turn out to be too neat.
OpenAI is no longer the cutting edge of AI these days, IMO. It’ll be fine if they close down. They blazed the trail, set the AI revolution in motion, but now lots of other companies have picked it up and are doing better at it than them.
They don’t use consumer GPUs, they use specialized data-center accelerators like the H100.
Not to mention that technology is continuing to advance in new and unexpected ways.
We’re getting close to artificial womb technology, for example. There are already artificial wombs being experimented with as a way to save extremely premature babies that wouldn’t survive in a conventional incubator.
Commodity humanoid robots are also in development, and AI has taken surprisingly rapid leaps in development over the past two years.
I could see a possibility where in a couple of decades a human baby could be born from an artificial womb and raised to adulthood entirely by machines, if we really really needed to for some reason. Embryo space colonization is the usual example given, but it could also potentially work as a way to counter population decline due to people simply not wanting to do their own birthing and child-rearing.
The main problem with adding your own page is ensuring that the “no original research” rule is followed. In principle, everything on Wikipedia should be verifiable by third parties. So if you write an article about yourself and say “Their dog’s name is Chesterfield,” there needs to be some kind of external source that other editors can use to check whether that’s true. People writing about themselves often overlook that sort of thing. A classic example is the trouble Philip Roth had trying to correct a Wikipedia article about a book he’d written; Wikipedia couldn’t simply “take his word for it.”
The other major problem is the “neutral point of view” rule. It’s very difficult to write about yourself in a neutral manner, so it’s safe to assume that other editors will scrutinize the neutrality of anything you write about yourself very closely.
Probably the best way to go if you’re notable is to ensure that you’ve got a detailed biography of yourself published somewhere and then point Wikipedia editors at it. And don’t get possessive about your Wikipedia article; it’s likely going to end up saying something you didn’t want it to say, and there’s not a lot you can do about that if it’s within their rules.
If someone wants to pay me to upvote them I’m open to negotiation.
A lot of people are keen to hear that AI is bad, though, so the clicks go through on articles like this anyway.
Aha, so this must all be Elon’s fault! And Microsoft!
There are lots of whipping boys these days that one can leap to criticize and get free upvotes.
img2img is not “training” the model. Completely different process.
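Just as an illustrative sketch (using the Hugging Face diffusers API as one common example, with a placeholder checkpoint name): an img2img call is a pure inference pass. The weights are loaded and never updated, which is exactly what makes it not training.

```python
# Rough sketch: img2img is an inference pass, not a training step.
# Uses the Hugging Face diffusers API; the checkpoint name is a placeholder.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB")

# No optimizer, no loss, no gradient updates -- the weights stay frozen.
# The pipeline partially noises the input image and denoises it toward the prompt.
result = pipe(
    prompt="a watercolor landscape",
    image=init_image,
    strength=0.6,  # how far the output is allowed to drift from the original
).images[0]
result.save("output.png")
```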
You realize that those “billions of dollars” have actually resulted in a solution to this? “Model collapse” has been known about for a long time and further research figured out how to avoid it. Modern LLMs actually turn out better when they’re trained on well-crafted and well-curated synthetic data.
Honestly, everyone seems to assume that machine learning researchers are simpletons who’ve never used a photocopier before.
Workarounds for those sorts of limitations have been developed, though. Chain-of-thought prompting has been around for a while now, and I recall recently seeing an article about a model that had that built right into it; it had been trained to use <thought></thought> tags to enclose invisible chunks of its output that would be hidden from the end user but would be used by the AI to work its way through a problem. So if you asked it whether cats had feathers it might respond “<thought>Feathers only grow on birds and dinosaurs. Cats are mammals.</thought> No, cats don’t have feathers.” And you’d only see the latter bit. It was a pretty neat approach to improving LLM reasoning.
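As a minimal sketch of the serving side of that idea (the tag name is taken from the example above, not from any particular model’s actual format), the hidden reasoning just gets stripped out before the reply reaches the user:

```python
import re

def strip_thoughts(raw_output: str) -> str:
    """Drop hidden <thought>...</thought> spans so the user only sees the answer."""
    return re.sub(r"<thought>.*?</thought>", "", raw_output, flags=re.DOTALL).strip()

raw = ("<thought>Feathers only grow on birds and dinosaurs. Cats are mammals.</thought> "
       "No, cats don't have feathers.")
print(strip_thoughts(raw))  # -> No, cats don't have feathers.
```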
And they’re overlooking that radionuclide contamination of steel actually isn’t much of a problem any more, since the surge in background radionuclides caused by nuclear testing peaked in 1963 and has since gone down almost back to the original background level again.
I guess it’s still a good analogy, though. People bring up Low Background Steel because they think radionuclide contamination is an unsolved problem (despite it having been basically solved), and they bring up “model collapse” because they think it’s an unsolved problem (despite it having been basically solved). It’s like newspaper stories: everyone sees the big scary front-page headline, but nobody pays attention to the little block of text retracting it on page 8.
Which is actually a pretty good thing.
I wouldn’t call it a “dud” on that basis. Lots of models come out with lagging support on the various inference engines; it’s a fast-moving field.
Why does the rule need to be specific to data centers? Why not just try to encourage renewable energy in general?
The intersection of “Luddite hooligan” and “stops to think about technological capabilities and future consequences before vandalizing stuff” is not large.
Makes it all the more amusing how OpenAI staff were fretting about how GPT-2 was “too dangerous to release” back in the day. Nowadays that class of LLM is a mere toy.
AI engineers are not a unitary group with opinions all aligned. Some of them really like money too. Or just want to build something that changes the world.
I don’t know of a specific “when” where a bunch of engineers left OpenAI all at once. I’ve just seen a lot of articles over the past year with some variation of “<company> is a startup founded by former OpenAI engineers.” There might have been a surge when Altman was briefly ousted, but that was brief enough that I wouldn’t expect a visible spike on the graph.