So nothing? Ok
They don’t support new technologies (Wayland), why would they drop support for old ones?
Yes, looks like the actual advantage (or disadvantage, depending on who you are) is ensuring that you don’t send a false location to a third party.
You then execute that SNARK on your local device with your current exact GPS coordinates
No, that’s what I’m suggesting. The method proposed in the paper makes no use of GPS; instead it relies on some peer-to-peer network.
You mean the hexagon? What prevents you from mapping your GPS output to a hexagon?
We just re-elected a fascist tyrant who wants to close, as far as he possibly can, the avenues of education and free speech that can be used to educate, organize, and publish against him
Can you point me to instances of Trump closing avenues to education and free speech during his first 4 years? Can’t find any, but I’m curious.
How is this better than just mapping GPS data to a hexagon and sending that to the third party?
Don’t see the point of this standard which runs over an inferior type of networking
Inferior how? Matter is not comparable to Z-Wave. Z-Wave is a mesh network; Matter is just a standard that lets Alexa, Siri, Google, etc. control the same devices. For Z-Wave-like functionality, Matter can run on top of Thread, which is in fact superior to Z-Wave.
is brought to us by the companies that created the interoperability problem in the first place
Of course. You don’t want to be the company known for refusing to participate in an open standard, even if you secretly don’t want it to succeed. Anyway, there’s no reason for companies not to want an open standard for controlling smart devices, since it literally lets everyone support more devices for basically no effort once they add support for Matter.
Do you have source on this? Never heard of it.
What part of the OS should manage the packages?
The OS package manager. This is already a thing with Python in apt and pacman, where it will give you a fat warning if you try to install a package through pip instead of the actual OS package manager (i.e. `pacman -Syu python-numpy` instead of `pip install numpy`).
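For what it’s worth, on newer distros this has gone from a warning to a hard stop (PEP 668). A rough sketch of the Arch behavior, output abridged:

```
# On distros that adopted PEP 668 (current Arch, Debian 12+), pip refuses
# to install into the distro-managed Python environment:
pip install numpy
# -> error: externally-managed-environment
# The supported route is the distro package manager:
sudo pacman -S python-numpy
```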
That’s like saying clock rate and core count are fake terms. Sure, by themselves they might not mean much, but they’re part of a system that directly benefits from them being high.
The issue with the teraflops metric is that it is roughly inversely proportional to the bit-width of the data, meaning that teraflops@8-bit is about 2x teraflops@16-bit. So giving teraflops without specifying the bit-width it comes from is almost useless. Although you could argue that 8-bit is too low for modern games and 64-bit trades too much performance for its accuracy gain, so you can assume the teraflops from a gaming company are based on 16-bit/32-bit performance.
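A toy calculation (numbers made up; the doubling only holds on hardware that can pack two half-width operations per lane):

```
# Hypothetical GPU rated at 12 TFLOPS in FP32, assuming the rate doubles
# each time the operand width halves:
fp32=12
echo "FP16:  $((fp32 * 2)) TFLOPS"   # 24
echo "8-bit: $((fp32 * 4)) TFLOPS"   # 48
```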
This has probably been in the works for years, and RISC-V’s profile (RVA23) for proper user application support was only released a few days ago.
“Hate speech is ok as long as it’s against the people I hate”
Politicians when they realize the commercialized espionage they’ve allowed also applies to them:
Well technically there are already a few out there, most notably Alibaba’s (found in the DC-ROMA laptop), but these are slow relative to what’s available in other architectures and exist mostly for developers to test their software and make sure it’s ready for RISC-V. But nothing is stopping you from buying one and daily-driving it; it would just probably be a horrible experience.
And it does not concern you that this RVA profile is version 23?
Not sure where you got that information. There are only 5 RISC-V profiles.
And they are incompatible with version 23 because they lack instructions?
Like all the x86 CPUs from a few years ago that don’t have all the new extensions? Not supporting new extensions doesn’t mean the CPU is useless, only that it’s worse than newer ones, as things should be when there’s progress. Or do you throw out your x86 CPU every time Intel/AMD create a new instruction?
So a compiler would have to support at least a certain number of those profiles
Do you think compilers only target one x86 version with one set of instructions? For example, x86 alone has the SIMD versions SSE, SSE2, SSE3, SSSE3, SSE4, SSE4.1, SSE4.2; compilers support all of them, and that’s literally just the SIMD instructions. What’s new?
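To make that concrete, here’s a sketch of one program built against three x86-64 feature baselines (`prog.c` is a placeholder; the v2/v3 levels need GCC 11+ or Clang 12+):

```
gcc -O2 -march=x86-64    -o prog_v1 prog.c   # baseline: SSE/SSE2 only
gcc -O2 -march=x86-64-v2 -o prog_v2 prog.c   # + SSE3/SSSE3/SSE4.1/SSE4.2
gcc -O2 -march=x86-64-v3 -o prog_v3 prog.c   # + AVX/AVX2/FMA
```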
It’s getting there, but running a full-on PC is such a complex task compared to micros or special-purpose devices.
Designing application-ready CPUs is hard, but not really for these companies. The main issue was the need for a standard, given how many optional extensions are available for RISC-V. The RVA profiles fix this problem by giving a set of extensions required to be user-mode application ready, and they have been a thing for a while. However, these were lacking one important capability for modern applications: vector processing. RISC-V already had packed SIMD support (similar to what x86 has), but the vector extension is so much better that there’s really no need to even bother with packed SIMD except on some microcontrollers.
The RVA23 profile, ratified 4 days ago, addresses this by adding the vector extension to the list of required extensions for an application-ready CPU. This should be enough for running modern applications, so maybe we’ll see some nice stuff in the next 1-2 years.
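As a rough illustration of what the vector extension changes at the toolchain level (`app.c` is hypothetical; assumes a recent riscv64 GCC cross-compiler with RVV support):

```
# Same source, scalar baseline vs. an RVA23-class target; the "v" in
# -march enables the vector extension, letting the compiler
# auto-vectorize plain loops.
riscv64-linux-gnu-gcc -O3 -march=rv64gc  -o app_scalar app.c
riscv64-linux-gnu-gcc -O3 -march=rv64gcv -o app_vector app.c
```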
That’s a good thing: it means you can design RISC-V CPUs without functionality you don’t need (like microcontrollers that only need basic operations). For those who want a complete CPU, there are the RVA profiles (the latest being RVA23), each a list of extensions required for an application-ready CPU. So there’s really just one “standard” for general-purpose computing; everything else is for specialized products.
The Steam client needs the XWayland translation layer to work on any modern DE, plus 32-bit libraries (which are not installed by default).
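On Arch, for instance (assuming the stock pacman.conf), getting those 32-bit libraries means enabling the multilib repo first:

```
# 32-bit libraries come from the [multilib] repo, disabled by default;
# enable it in /etc/pacman.conf, then:
sudo pacman -Syu steam   # pulls in the lib32-* dependencies it needs
```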