Johnson’s position on Ukraine is one of the few things where he did in fact show some strong leadership. Couldn’t fault him for that.
He failed in a lot of other areas unfortunately.
Morality is a product of civilisation and community. It’s the ability of groups to agree on a single set of rules by which they would like to be treated, since breaching those rules can cause physical or emotional harm. And then there’s simple evolution, where certain “moral rules” allowed civilisations to survive and thrive better than others.
At no point is “god” required here.
A currency going up in value tends not to be great for an economy, as people will save instead of spend. It stops being a currency and becomes something of an asset. A slowly depreciating currency tends to foster the most economic growth.
Our company has directly profited from a competitor that leaked sensitive data, because some of their large corporate customers decided to switch to us.
Businesses don’t like being on the receiving end of a data leak either, you know.
I think you’re being too pessimistic about IT security, particularly in the financial sector. A lot of the security rules and audits aren’t even government-run; it’s the sector regulating itself. And trust me, they are pretty thorough and quite nitpicky about stuff.
The cost of failing an audit also often isn’t even a fine; it’s direct exclusion from a payment scheme. Basically, do it right or don’t do it at all. Given that this is a strict requirement for staying in business, most of these companies will have invested sufficiently in IT security.
Of course it’s not airtight, no system really is. But particularly in the financial sector most companies really do have their IT security in order.
That’s not entirely true. In order to be allowed to keep processing transactions you have to adhere to strict rules which do get regularly audited. And then there’s the whole “customers will switch to another more reliable party in case of outages or security problems”. And trust me, I’ve seen first-hand that they do.
Not Tesla though, it relies on cameras only.
Don’t higher interest rates mean more money is spent paying that interest, leaving less money available to spend on other things? That in turn reduces the money supply in circulation, which curbs inflation.
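A toy worked example of that mechanism (all numbers are invented purely for illustration):

```python
# Toy illustration (invented numbers): higher interest on the same debt
# leaves less disposable money to spend back into circulation.
debt = 200_000   # outstanding loan principal
budget = 30_000  # yearly money available after essentials

for rate in (0.02, 0.05):
    interest = debt * rate
    spendable = budget - interest
    print(f"rate {rate:.0%}: interest {interest:,.0f}, left to spend {spendable:,.0f}")

# rate 2%: interest 4,000, left to spend 26,000
# rate 5%: interest 10,000, left to spend 20,000
```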
Would they? The XZ utils backdoor was only discovered by what can only be described as an insanely attentive developer who happened to be testing something unrelated and who happened to notice a small increase in the startup time of the library, and was curious enough to go and figure out why.
Open does not mean “can’t be backdoored”.
I meant a library unknown to me specifically. I do encounter hallucinations every now and then but usually they’re quickly fixable.
It’s made me a little bit faster, sometimes. It’s certainly not a 50-100% increase or anything; maybe 5-10% at best?
I tend to write a comment describing what I want to do, and have Copilot suggest the next 1-8 lines for me. I then check whether the code is correct and fix it if necessary.
For small tasks it’s usually good enough, and I’ve already written a comment explaining what the code does. It can also be convenient to use it to explore an unknown library or functionality quickly.
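A hypothetical illustration of that comment-driven workflow (the function and data shape are made up for the example, not taken from any real project):

```python
# The comment is what I write; the code below is the kind of short
# completion Copilot tends to suggest for it.

# Group a list of (timestamp, level, message) log records by level.
from collections import defaultdict

def group_by_level(records):
    grouped = defaultdict(list)
    for timestamp, level, message in records:
        grouped[level].append((timestamp, message))
    return grouped

records = [("12:00", "ERROR", "disk full"), ("12:01", "INFO", "retrying")]
print(group_by_level(records)["ERROR"])  # [('12:00', 'disk full')]
```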
Terrorism is more about the intent rather than the result. Did Israel intend to instill terror in the civilian population or did they genuinely try to target Hezbollah militants (and perhaps didn’t care much about any civilian casualties)?
In general, you should pay for content that you’re going to use commercially.
Sure, but merely linking to a page isn’t reusing the content. If said content were being embedded, rehashed or otherwise shown, then compensation would be fair. But merely linking to a page should absolutely be free. That’s a massively important cornerstone of the internet that shouldn’t be compromised on.
Linking directs traffic, which can be monetized by the website itself; it shouldn’t require additional fees on top.
Eh, I have a few things from Kickstarter that were successful. Exploding Kittens is probably the most successful one of all the ones I own.
Isn’t Umbraco the one that struggled loading a page that didn’t exist, taking several seconds to load the PageNotFound page and causing very high CPU load in the meantime? Like, an issue they had for years?
Somehow I don’t have great faith in that solution, but perhaps it’s improved in recent years.
RFCs aren’t really law, you know. Implementations can deviate; it just means less compatibility.
If producing an AGI is intractable, why does the human meat-brain exist?
Ah, but here we have to get a little pedantic: producing an AGI through currently known methods is intractable.
The human brain is extremely complex and we still don’t fully know how it works. We don’t know if the way we learn is really analogous to how these AIs learn. We don’t really know if the way we think is analogous to how computers “think”.
There’s also another argument to be made: that an AGI matching the currently agreed-upon definition is impossible. And I mean that in the broadest sense, e.g. humans don’t fit the definition either. If that’s true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we’re overestimating how special we are.
And then there’s the argument that you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard, and that creating one really isn’t “around the corner” as some enthusiasts claim. For any practical AGI we’d have to finish training in maybe a couple of years, not millions of years.
And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?
This is a gross misrepresentation of the study.
That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.
That’s not their argument. They’re saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.
Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.
They’re not talking about achieving it in general, they only claim that no known techniques can bring it about in the near future, as the AI-hype people claim. Again, they prove this.
That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.
That’s not what they did. They set up an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. they have a computer with limitless memory, they have infinite and perfect training data, they can sample without any bias, current techniques can eventually create AGI, an AGI would only have to be slightly better than random chance rather than perfect, etc…), and then present a computational proof showing that this scenario contradicts established hardness results.
Basically, if you can train an AGI through currently known methods, then you have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There’s a technical explanation in the paper that I’m not going to try to rehash since it’s been too long since I worked on computational proofs, but it seems to check out. But this is a contradiction, as we have proof, hard mathematical proof, that such an algorithm cannot exist and must be non-polynomial or NP-hard. Therefore, AI learning for an AGI must also be NP-hard. And because every known AI learning method is tractable, it cannot possibly lead to AGI. It’s not a strawman, it’s a hard proof of why it’s impossible, like proving that pi has infinitely many decimals or something.
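Loosely formalised, the structure of that argument looks something like this (my own rough sketch, not the paper’s notation; “Perfect-vs-Chance” is the problem name used above):

```latex
% Rough sketch of the reduction (my notation, not the paper's):
\begin{align*}
  &\text{Assume: a known learning method } L \text{ produces an AGI in polynomial time.} \\
  &\text{Then } L \text{ can be used to solve Perfect-vs-Chance in polynomial time.} \\
  &\text{But Perfect-vs-Chance provably has no polynomial-time solution.} \\
  &\Rightarrow \text{contradiction, so no such } L \text{ exists; training an AGI} \\
  &\phantom{\Rightarrow~} \text{with known methods is intractable.}
\end{align*}
```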
Ergo, anyone who claims that AGI is around the corner either means “a good AI that can demonstrate some but not all human behaviour” or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we’d still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don’t offer a thought experiment, they provide a computational proof for this.
This article was amended on 14 September 2023 to add an update to the subheading. As the Guardian reported on 12 September 2023, following the publication of this article, Walter Isaacson retracted the claim in his biography of Elon Musk that the SpaceX CEO had secretly told engineers to switch off Starlink coverage of the Crimean coast.
IIRC Musk didn’t switch it off; it wasn’t turned on in the first place, and Musk refused to turn it on when the Ukrainian military requested it.
Musk is a shithead but not for this reason.
Nothing lasts forever. But for now, it’s decent enough.