The onrushing AI era was supposed to create boom times for great gadgets. Not long ago, analysts were predicting that Apple Intelligence would start a “supercycle” of smartphone upgrades, with tons of new AI features compelling people to buy them. Amazon and Google and others were explaining how their ecosystems of devices would make computing seamless, natural, and personal. Startups were flooding the market with ChatGPT-powered gadgets, so you’d never be out of touch. AI was going to make every gadget great, and every gadget was going to change to embrace the AI world.

This whole promise hinged on the idea that Siri, Alexa, Gemini, ChatGPT, and other chatbots had gotten so good, they’d change how we do everything. Typing and tapping would soon be passé, all replaced by multimodal, omnipresent AI helpers. You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you. Tech companies large and small have been betting on virtual assistants for more than a decade, to little avail. But this new generation of AI was going to change things.

There was just one problem with the whole theory: the tech still doesn’t work. Chatbots may be fun to talk to and an occasionally useful replacement for Google, but truly game-changing virtual assistants are nowhere close to ready. And without them, the gadget revolution we were promised has utterly failed to materialize.

In the meantime, the tech industry allowed itself to be so distracted by these shiny language models that it basically stopped trying to make otherwise good gadgets. Some companies have more or less stopped making new things altogether, waiting for AI to get good enough to ship. Others have resorted to shipping more iterative, less interesting upgrades because they have run out of ideas other than “put AI in it.” That has made the post-ChatGPT product cycle bland and boring, in a moment that could otherwise have been incredibly exciting. AI isn’t good enough, and it’s dragging everything else down with it.

Archive link: https://archive.ph/spnT6

  • bamboo@lemm.ee · 23 hours ago

    Detecting a hallucination programmatically is the hard part. What is truth? Given an arbitrary sentence, how does one accurately measure its truthfulness? And what about the edge cases, like a statement that is itself true but misrepresents something, or a statement that is correct in a specific context but generally incorrect?

    I’m an AI optimist, but I don’t see hallucinations being solved completely as long as LLMs are statistical models of language. We’ll probably end up with a set of heuristics and techniques that can catch 90% of them.
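
    To make the “heuristics and techniques” idea concrete, here’s a minimal sketch of one common approach, self-consistency checking: sample the model several times and flag answers that disagree with one another. This is an illustration, not anyone’s production method; ask_model is a hypothetical stand-in for whatever LLM API you use, and the similarity measure and threshold are arbitrary choices.

      import difflib

      def ask_model(question: str) -> str:
          # Hypothetical stand-in for an LLM API call with sampling
          # (temperature > 0) enabled; wire up to your model of choice.
          raise NotImplementedError

      def agreement(a: str, b: str) -> float:
          # Crude lexical similarity; a real system might use an NLI
          # model or embedding similarity instead.
          return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

      def likely_hallucination(question: str, samples: int = 5,
                               threshold: float = 0.6) -> bool:
          # Intuition: facts the model has actually internalized tend to
          # come back consistently, while confabulations vary run to run.
          assert samples >= 2
          answers = [ask_model(question) for _ in range(samples)]
          scores = [agreement(answers[i], answers[j])
                    for i in range(samples)
                    for j in range(i + 1, samples)]
          return sum(scores) / len(scores) < threshold

    Note the limitation, which is the commenter’s point: this only catches unstable hallucinations. A model that is consistently wrong about something sails straight through, which is why heuristics like this plateau well short of 100%.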

    • SkyeStarfall@lemmy.blahaj.zone · 19 hours ago

      I mean, in the end, I think it’s literally an unsolvable problem of intelligence. It’s not like we humans don’t “hallucinate” ourselves. Fundamentally, your information processing is only as good as the information you get in; if that information is wrong, you’re going to be wrong. And then there are plain mistakes: we make them constantly, and we’re the most intelligent beings we know of in the universe.

      The question is what issue, exactly, we’re attempting to solve regarding AI. It’s probably more useful to reframe it as “the AI not lying or giving false information when it has enough information to know better.” Though even that is a higher bar than we humans set for ourselves.

      • Pennomi@lemmy.world · 18 hours ago

        Yeah, like, have you ever met one of those crazy guys who think the pyramids were literally built by aliens? Humans can get caught in a confidently wrong state as well.