That is exactly what it doesn’t do. There is no “understanding”, and that is exactly the problem. It generates output similar to what it has already seen in the dataset it was fed, output that might correlate with your input.
Which is exactly the problem people think has been solved but that isn’t anywhere near solved. It cannot comprehend semantics; the meaning of things is completely beyond it, and beyond all other AIs as well.
Unfortunately, saying “I made a thing that produces vaguely human-looking speech with little content” isn’t astonishing to most people. So they go looking for something useful this breakthrough machine must be able to do, find nothing, and that leads to these articles.
Might be true for you, but most people do have a concept of true and false and don’t just dream up stuff to say.
An algorithm suggesting things you might like doesn’t have to be AI. There are simple metrics that achieve that (e.g. “things other people who liked this also liked”; see the sketch below).
Or are we calling every algorithm AI now?
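To illustrate the point: a plain co-occurrence count is enough to get “people who liked this also liked” behaviour, no learning involved. A minimal sketch, assuming a toy dataset; the names (`user_likes`, `recommend_similar`) and data are made up for illustration, and a real system would add normalization, thresholds, etc.:

```python
from collections import Counter

# Hypothetical toy data: user -> set of liked items.
user_likes = {
    "alice": {"dune", "foundation", "hyperion"},
    "bob": {"dune", "hyperion"},
    "carol": {"dune", "neuromancer"},
    "dave": {"foundation", "neuromancer"},
}

def recommend_similar(item, likes, top_n=3):
    """Rank items by how often they co-occur with `item` in users' like-sets."""
    co_counts = Counter()
    for liked in likes.values():
        if item in liked:
            for other in liked:
                if other != item:
                    co_counts[other] += 1
    return co_counts.most_common(top_n)

print(recommend_similar("dune", user_likes))
# e.g. [('hyperion', 2), ('foundation', 1), ('neuromancer', 1)]
```

Every step here is inspectable: you can say exactly why an item was recommended (it co-occurred N times), which is the point about knowing how the algorithm works.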
The implied difference is whether someone, or even you, can know how the algorithm works, which for the simple “also liked” metric is relatively straightforward.
The analogy is that you buy a car (like your entertainment gear: if it breaks, you buy a new one to replace it, and you also carry out all the maintenance yourself), but suddenly you can’t drive backwards anymore because the manufacturer retroactively decided you should pay extra for that (possibly as a subscription).
I would say it is then well within your rights to make your car drive backwards, whatever it takes.