ChatGPT generates cancer treatment plans that are full of errors: study finds that ChatGPT provided false information when asked to design cancer treatment plans.

Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • amki@feddit.de · 1 year ago

    Might be true for you, but most people do have a concept of true and false and don’t just dream up stuff to say.

    • markr@lemmy.world · 1 year ago

      Actually, we ‘dream up’ things to say quite a lot: our unconscious processes matter far more to what we say than we like to admit. We’re also not very good at evaluating the truth value of complex statements.

    • eggymachus@sh.itjust.works · 1 year ago

      Yeah, I was probably a bit too caustic, and there’s more to (A)GI than an LLM can achieve on its own, but I do believe that some part of human consciousness, perhaps a large part, works in a similar manner.

      I also think that LLMs must have models of concepts; otherwise they couldn’t do what they do. Probably models of truth and falsity too, but perhaps lacking external grounding?