- Anthropic’s new Claude 4 features an aspect that may be cause for concern.
- The company’s latest safety report says the AI model attempted to “blackmail” developers.
- It resorted to such tactics in a bid for self-preservation.
I don’t consider that relevant to sentience. Structurally, biological systems change in response to inputs; LLMs do not, since their weights are frozen after training. I consider that plasticity a prerequisite for sentience. Others may not.
We will undoubtedly see systems that incorporate some form of online learning and mutability into LLMs. Re-evaluating after that would make sense.