Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy. We publish opinion and analysis.
It’s a multi-faceted problem.
Opaqueness and lack of interop are one thing. (Although, I’d say the Lemmy/Reddit comparison is a bit off-base, since those center around user-to-user communication, so prohibition of interop is a bigger deal there.) Data dignity or copyright protection is another thing.
And also there’s the fact that anything can (and will) be called AI these days.
For me, the biggest problem with generative AI is that its most powerful use case is what I’d call “signal-jamming”.
That is: Creating an impression that there is a meaningful message being conveyed in a piece of content, when there actually is none.
Signal-jamming is kinda what generative AI does by default. The fact that it produces meaningless content so easily, even accidentally, creates a big problem.
In the labor market, I think the problem is less that automated processes replace your job outright and more that, if every interaction is mediated by AI, your power to exert control over how business is conducted gets diluted.
As a consumer, having AI as the first line of defense in customer support dilutes your ability to hold a seller responsible for their services.
In the political world, astroturfing has never been easier.
I’m not sure how much fighting back with your own AI actually helps here.
If we end up just having AIs talk to other AIs as the default for all communication, we’ve pretty much forsaken the key evolutionary feature of our species.
It’s kind of like solving nuclear proliferation by perpetually launching nukes from every country to every country at all times forever.
Maybe if AI talks to AI for long enough, they get smart 🤔