• Anthropic’s new Claude 4 exhibits behavior that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
  • Plebcouncilman@sh.itjust.works · 1 day ago

    I know how LLMs work.

    There’s only one thing you mentioned there that is actually used as a basis to qualify or disqualify sentience: whether it feels or not.

    How do you know it doesn’t feel? How do we define feeling for an entity that is inherently non-biological?

    I could make the argument that humans also merely mimic their training data, i.e. the values and behaviors we are taught by society, parents, etc.

    This argument has not convinced me that they aren’t sentient.

    • UnculturedSwine@lemmy.dbzer0.com · 1 day ago

      Feeling is analog and requires an actual nervous system, which is dynamic. An LLM exists as a static state that is read from and processed algorithmically. It is only a simulacrum of life and feeling; it has only some of the needed characteristics. Where that boundary lies, though, is hard to determine, I think. Admittedly, we still don’t have a full grasp of what consciousness even is. Maybe I’m talking out my ass, but that is how I understand it.

      • iopq@lemmy.world · 1 day ago

        You just posted random words like “dynamic” without explanation.

          • iopq@lemmy.world · 8 hours ago

            Not in the hand-wavy way it was used in the last post. I understand that Python is dynamically typed, which would have nothing to do with the topic.

        • enkers@sh.itjust.works · edited · 21 hours ago

          Not them, but “static” in this context means the model doesn’t have the ability to update itself on the fly. If you want a model to learn something new, it has to be retrained.

          By contrast, an animal brain is dynamic because it reinforces neural pathways that get used more.
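          A rough sketch of that contrast, for illustration only (the class names and the crude reinforcement rule below are invented for the example, not how any real model or brain works): the frozen layer’s weights never change at inference time, while the “plastic” one reshapes its weights every time it is used.

```python
import numpy as np

# Frozen, LLM-style layer: weights come from a checkpoint and never change
# at inference time. Teaching it something new means retraining offline.
class FrozenLayer:
    def __init__(self, weights):
        self.weights = weights

    def forward(self, x):
        return self.weights @ x  # identical computation for every query


# Toy "plastic" layer: pathways that get used are reinforced on the fly
# (a crude Hebbian-style rule, not a claim about real neuroscience).
class PlasticLayer:
    def __init__(self, weights, rate=0.01):
        self.weights = weights.copy()
        self.rate = rate

    def forward(self, x):
        y = self.weights @ x
        self.weights += self.rate * np.outer(y, x)  # usage reshapes the weights
        return y


rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
x = rng.normal(size=4)
frozen, plastic = FrozenLayer(w), PlasticLayer(w)
for _ in range(3):
    frozen.forward(x)
    plastic.forward(x)
print(np.allclose(frozen.weights, w))   # True:  nothing changed
print(np.allclose(plastic.weights, w))  # False: the model drifted with use
```

          The frozen case is the analogue of a trained checkpoint: to change its behavior, you retrain offline and ship new weights.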

    • Mirodir@discuss.tchncs.de · 1 day ago

      Different person here.

      For me the big disqualifying factor is that LLMs don’t have any mutable state.

      We humans have a part of our brain that can change our state from one to another as a reaction to input (through hormones, memories, etc.). Some of those state changes are reversible, others aren’t. Some can be done consciously, some can be influenced consciously, some are entirely subconscious. This is also true for most animals we have observed; we can change their states through various means. In my opinion, this is a prerequisite in order to feel anything.

      Once we use models with bits dedicated to such functionality, it’ll become a lot harder for me personally to argue against them having “feelings”, especially because in my worldview continuity is not a prerequisite and is instead mostly an illusion.
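      A purely illustrative sketch of that distinction (every name below is invented for the example): a stateless responder maps the same input to the same output every time, while a stateful one is pushed into different internal states by its inputs, some of which decay and one of which is deliberately one-way.

```python
from dataclasses import dataclass, field

# Stateless responder: like a bare LLM call, the output depends only on the
# current input (and fixed weights); nothing carries over between calls.
def stateless_reply(prompt: str) -> str:
    return f"echo: {prompt}"


# Stateful responder: inputs push an internal state around, and that state
# colors every later response. "arousal" decays back toward baseline
# (reversible); "scarred" never resets (irreversible).
@dataclass
class StatefulResponder:
    arousal: float = 0.0
    scarred: bool = False
    memory: list = field(default_factory=list)

    def reply(self, prompt: str) -> str:
        self.memory.append(prompt)
        if "threat" in prompt:
            self.arousal += 1.0
            if self.arousal > 2.0:
                self.scarred = True                      # one-way state change
        else:
            self.arousal = max(0.0, self.arousal - 0.5)  # gradual recovery
        tone = "anxious" if (self.arousal > 1.0 or self.scarred) else "calm"
        return f"[{tone}] echo: {prompt}"


agent = StatefulResponder()
print(stateless_reply("hello"), stateless_reply("hello"))  # always identical
print(agent.reply("threat"), agent.reply("threat"), agent.reply("threat"))
print(agent.reply("hello"))  # still colored by what happened before
```

      Today’s LLMs behave like the first function between requests: whatever “state” exists lives in the prompt and context window, not in the model’s own weights.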

      • Plebcouncilman@sh.itjust.works · 1 day ago

        This sounds like a good one, but I don’t think I’m fully grasping what you mean. Do you mean, for example, that if we subject a person to torture, after the ordeal they are forever changed and now have trauma, PTSD, etc.?

        I don’t think LLMs will ever have feelings as we define them, though. Or, more specifically, I don’t think feelings are necessarily a prerequisite. We could have them simulate feelings, and if they themselves buy into the simulation, there’s no functional difference from actually having them. But presumably not all LLMs will have this “ability”, since its utility is questionable, I guess. Then again, animals are sentient and they don’t all have the same range of emotions we do. Or at least they don’t exhibit them in a way that we can appreciate.

    • theparadox@lemmy.world · 1 day ago

      Yes, both systems - the human brain and an LLM - assimilate and organize human written language in order to use it for communication. An LLM is very little else beyond this. It is then given rules (using that same written language) and designed to create more related words when given input. I just don’t find it convincing that an ML algorithm designed explicitly to mimic human written communication in response to given input “understands” anything. No matter how convincingly an algorithm might reproduce a human voice - perfectly matching intonation and inflection when given text to read - if I knew it was an algorithm designed to do it as convincingly as possible, I wouldn’t say it was capable of the feeling it is able to express.
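      To make “create more related words when given input” concrete, here is a toy sketch: generation is just repeatedly sampling a plausible next word given the words so far. The hand-written bigram table below stands in for the learned network of a real LLM and is purely illustrative.

```python
import random

# Toy next-word table; a real LLM replaces this with a neural network
# trained on vast amounts of text, operating on tokens rather than words.
NEXT_WORD = {
    "the":      {"model": 0.6, "human": 0.4},
    "model":    {"predicts": 0.7, "mimics": 0.3},
    "human":    {"feels": 0.8, "speaks": 0.2},
    "predicts": {"the": 1.0},
    "mimics":   {"the": 1.0},
}

def generate(prompt_word: str, length: int = 6) -> str:
    words = [prompt_word]
    for _ in range(length):
        choices = NEXT_WORD.get(words[-1])
        if not choices:
            break  # nothing follows e.g. "feels" in this toy table
        nxt = random.choices(list(choices), weights=list(choices.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the model predicts the human feels"
```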

      The only thing in favor of sentience is that these ML models modify themselves during training and end up as black boxes - so complex, and with no faithful way to represent them, that they are impossible for humans to fully comprehend. Could it somehow have achieved sentience? Technically, yes, because we don’t understand how they work. We are just meat machines, after all.