Archived link: https://archive.ph/Vjl1M

Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”

It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.

  • Liberteez@lemm.ee · 14 hours ago

    I am not saying other generative AI lacks flaws, but Google’s AI Overview is the most problematic generative AI implementation I have ever seen. It offends me that a company I used to trust continues to force this lie generator as a top result for the #1 search engine. And to what end? Just to have a misinformed populace over literally every subject!

    OpenAI has issues as well, but ChatGPT is a much, much better search engine with far fewer hallucinations per answer. Releasing AI Overview while the competition is leagues ahead on the same front is asinine!

    • Echo Dot@feddit.uk · 3 hours ago

      They famously trained it on Reddit, so it’s not surprising that it just comes up with nonsense.

      You would have thought that they would use a more stable data set. Although it does mean it’s very good at explaining the plots of movies badly.

    • Chulk@lemmy.ml · 12 hours ago

      “And to what end? Just to have a misinformed populace over literally every subject!”

      This is a feature, not a bug. We’re entering a new dark age, and generative AI is the tool that will usher it in. The only “problem” generative AI is efficiently solving is a populace with too much access to direct and accurate information. We’re watching as perfectly functional tools and services are rapidly replaced by something with inherent issues around reliability, ethics, and accountability.

      • Liberteez@lemm.ee · 11 hours ago

        In the case of Google’s AI Overview, I 1000% agree. I am not against all AI tools, but that company has clearly chosen evil.

    • klemptor@startrek.website · 13 hours ago

      I’ve resorted to appending “-ai” to every Google search because I don’t want to see their bullshit summaries. Outsourcing our thinking is lazy and dangerous, especially when the technology is so flawed.

      • Liberteez@lemm.ee · 12 hours ago

        I like that trick, noted! I mostly use DuckDuckGo as a browser and search engine now. If it fails, I use ChatGPT.

    • Modern_medicine_isnt@lemmy.world · 12 hours ago

      Saying you used to trust Google is really a core part of the problem. Google isn’t a person, just like AI isn’t a person. They both do what they are tasked with. Companies prioritize profit; AI prioritizes giving an answer, not necessarily a correct one. That is how it was designed.

      • Liberteez@lemm.ee · 11 hours ago (edited)

        Impressive how we seem to agree with each other, yet you still found a way to insult my way of putting it.