• helenslunch@feddit.nl
      5 months ago

      This is so fucking stupid. All Gemini (and every other LLM) does is find info on the internet and regurgitate it back to you. The LLM part just does its best to narrow it down to the specific answer you’re looking for much more quickly and summarize it. It has no way to tell fact from fiction.

      Same goes for the Google search engine. Does it suck? Yes. Is it worse than it used to be? Yes. Is that Google’s fault? Not necessarily; it’s more so the rise of AI uploading mountains of bullshit to the internet. Every other search engine has the same problems.

      Although I am in no way advocating for the use of Google search…

    • L_Acacia@lemmy.one
      5 months ago

      To be fair to Gemini, even though it is worse than Claude and GPT, the weird answers were caused by bad engineering, not bad model training. They were forcing the incorporation of Google search results even though the base model would most likely have gotten it right.