It seems to me like a MITM hacker can just redirect all requests to a Blockchain node towards their malicious node.
Actually, that’s not quite as clear.
The conventional wisdom used to be that (normal) porn makes people more likely to commit sexual abuse (in general). Then scientists decided to look into that. Slowly, over time, they’ve become more and more convinced that (normal) porn availability in fact reduces sexual assault.
I don’t see an obvious reason why it should be different in case of CP, now that it can be generated.
I see an access violation there…
What social contract? When sites regularly have a robots.txt
that says “only Google may crawl”, and are effectively helping enforce a monopoly, that’s not a social contract I’d ever agree to.
That’s not very deep. Closer to plain old logistic regression, really.
Never tried magit, but it doesn’t matter. It couldn’t possibly be good enough to be worth using an inferior editor.
The ease with which I can commit only separate hunks with lazygit has ensured I use it for commits, too. And once I’ve opened it to do the commit, I may as well also press P.
Learning git is very easy. For example, to do it on Debian, one simply needs to run `sudo apt install lazygit`
I think calling it “dangerous” in quotes is a bit disingenuous - because there is real potential for danger in the future - but what this article seems to want is totally not the way to manage that.
I would say the risk of having AI be limited to the ruling elite is worse, though - because there wouldn’t be everyone else’s AI to counter them.
And if AI is limited to a few, those few WILL become the new ruling elite.
Since I don’t think this analogy works, you shouldn’t stop there, but actually explain what the world would look like if everyone had access to AI technology (advanced enough to be comparable to a nuke), versus what it would look like if only a small elite had access to it.
> competition too intense

> dangerous technology should not be open source
So, the actionable suggestions from this article are: reduce competition and ban open source.
I guess what it’s really about is using fear to make sure AI remains in the hands of a few…
As much as people here laugh - because yes, I get that it’s very unlikely to work - I actually think this would be better for users than the ad-based model most social media use now.
Not exactly. For example, you can’t make the whole thing, GPL snippet included, available under MIT. You can only license your own contribution however you want (in addition to GPL).
That seems like a somewhat contrived example. Yes, it can theoretically happen - but in practice it would happen with a library, and most libraries are LGPL (or more permissive) anyway. By contrast, there have been plenty of stories lately of people who wrote MIT/BSD software, and then got upset when companies just took the code to include in their products, without offering much support in return.
Also, there’s a certain irony in saying what essentially amounts to, “Please license your code more permissively, because I want to license mine more restrictively”.
> Germany is determined to remove any systems from its telecoms networks
If only Huawei were the only such system…
I hold the opposite opinion in that creatives (I’d almost say individuals only, no companies) own all rights to their work and can impose any limitations they’d like on use. Current copyright law doesn’t extend quite that far though.
I think that point’s worth discussing by itself - leaving aside the AI - since you stated it quite generally.
I came up with some examples:
Taking your statement at face value - the answers should be: no (I can’t decorate), yes (it’s a valid restriction), and no (I can’t use it to illustrate my argument). But maybe you didn’t mean it quite that strictly? What do you think about each example, and why?
Except it’s not a collection of stories, it’s an amalgamation - and at a very granular level at that. For instance, take the beginning of a sentence from the middle of the first book, then switch to a sentence in the third, then finish with another part of the original sentence. Change some words here and there, add one for good measure (based on some sentence in the seventh book). Then fix the grammar. All the while, make sure there’s some continuity between the sentences you’re stringing together.
That counts as “new” for me. And a lot of stuff humans do isn’t more original.
Just now, I tried to get Llama-2 (I’m not using OpenAI’s stuff because they’re not open) to reproduce the first few paragraphs of Harry Potter and the Philosopher’s Stone, and it didn’t work at all. It created something vaguely resembling it, but with lots of made-up stuff that doesn’t make much sense. I certainly can’t use it to read the book or pirate it.
I’m somewhat skeptical. What if Let’s Encrypt decided to misbehave tomorrow? Would the browsers have the guts to shut it down and break all sites using it?