• Zarxrax@lemmy.world · 4 months ago

    This bill seems somewhat misguided. How in the hell is something like a large language model going to cause a mass casualty incident? What I'm more worried about are things that could realistically pose a danger. What if robotic dogs patrolling the border have machine guns mounted on their backs, a child does something unexpected, and the robot wipes out an entire family? What if a self-driving car suddenly takes off at full speed through a parade? They're trying to slot AI into everything now, and it will inevitably end up in some places where it causes loss of life. But chatbots? Give me a break.

    • technocrit@lemmy.dbzer0.com · edited · 4 months ago

      You gotta understand, the state is run by paranoid sociopaths. They’ll dream up any delusional scenario and then use it as an excuse for more surveillance, prisons, wars, control, etc.

      For example, imagine somebody hacks a major social platform and sends a fake message from an AI/deepfake Trump to thousands of chuds, inciting some kind of fascist terrorism. It might sound unrealistic, but what if?!?! I could imagine something similar happening with current tech. (I think it’s part of why they’re trying to ban TikTok.)

      In general I feel like “AI” is almost entirely lies, hype, grifting, etc. But I can imagine a few scenarios that the state might genuinely want to disincentivize.