I’m not making an argument against it, just clarifying where it sits as technology.
As I see it, it’s like electric cars: a technology that was overtaken by something else in the early days of its domain, even though it came out first (the first cars were electric; the ICE engine was invented later), and which now has a chance to succeed because many other things have changed in the meanwhile and we’re a lot closer to the limits of the tech that did get widely adopted back then.
It actually makes a lot of sense to improve the speed of what programming can do by making it capable of also working outside the step-by-step instruction-execution straitjacket that is the CPU/GPU clock.
I would also add that one of the best experiences one can have is having to maintain one’s own code. Coming back 6 months or a year later (long enough to have forgotten the details) to add new features to code one wrote is quite the eye-opener and a serious incentive to improve the quality of one’s coding, because now you’re being burned by all the stupid, lazy choices of your past self, and also spotting things you didn’t know were important and whose full impact you weren’t aware of.
I’ve crossed paths with people who, for one reason or another, only ever wrote brand-new programs from the ground up, and their code tends to be messy, hard to understand, and prone to breaking in unexpected ways and places when people try to add or change features in it.