Police in England installed an AI camera system along a major road, and it caught almost 300 drivers in its first 3 days: 180 seat belt offenses and 117 mobile phone offenses.

  • Aurenkin@sh.itjust.works · 1 year ago

    I think it’s a pretty good idea: the AI does a first pass, flags potential violations, and sends them to a human for review. It’s not like they’re just sending people fines directly based on the AI output.
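
    Roughly the flow being described, as a minimal sketch (every name here, from Detection to reviewer_confirms, is hypothetical and only illustrates the human-in-the-loop idea, not the actual system):

    ```python
    from dataclasses import dataclass
    from typing import Callable, Iterable, List

    @dataclass
    class Detection:
        plate: str
        offence: str       # e.g. "seat belt" or "mobile phone"
        confidence: float  # model score in [0, 1]

    def first_pass(detections: Iterable[Detection], threshold: float = 0.8) -> List[Detection]:
        """AI pass: only detections above a confidence threshold get flagged at all."""
        return [d for d in detections if d.confidence >= threshold]

    def human_review(flagged: List[Detection],
                     reviewer_confirms: Callable[[Detection], bool]) -> List[Detection]:
        """Human pass: nothing becomes a fine unless a reviewer confirms it."""
        return [d for d in flagged if reviewer_confirms(d)]

    # Made-up detections and a reviewer who rejects one of the flagged ones.
    detections = [
        Detection("AB12 CDE", "mobile phone", 0.93),
        Detection("FG34 HIJ", "seat belt", 0.55),   # below threshold, never even reviewed
        Detection("KL56 MNO", "seat belt", 0.88),
    ]
    fines = human_review(first_pass(detections), lambda d: d.plate != "KL56 MNO")
    print(fines)  # only the confirmed mobile phone detection is left
    ```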

    • the_sisko@startrek.website · 1 year ago

      I’m definitely a fan of better enforcement of traffic rules to improve safety, but using ML* systems here is fraught with issues. ML systems tend to learn the human biases that were present in their training data and continue to perpetuate them. I wouldn’t be shocked if these traffic systems, for example, disproportionately impact some racial groups. And if the ML system identifies those groups more frequently, even if the human review were unbiased (unlikely), the outcome would still be biased.
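
      To make that concrete, here is a toy calculation (the numbers are invented; the point is only that an even-handed review stage can’t undo a skewed detection stage):

      ```python
      # Invented numbers: both groups commit offences at the same rate, but the
      # detection model flags one group's offences more often than the other's.
      offences = {"group_a": 1000, "group_b": 1000}
      detection_rate = {"group_a": 0.60, "group_b": 0.90}   # biased first pass
      confirm_rate = 0.95                                   # identical human review for both

      for group, n in offences.items():
          fined = n * detection_rate[group] * confirm_rate
          print(f"{group}: fined for {fined:.0f} of {n} offences ({fined / n:.1%})")

      # group_a: fined for 570 of 1000 offences (57.0%)
      # group_b: fined for 855 of 1000 offences (85.5%)
      # The review stage treated both groups identically, yet one group still ends up
      # roughly 1.5x as likely to be fined per offence, purely because of the first pass.
      ```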

      It’s important to see good data showing these systems are fair, before they are used in the wild. I wouldn’t support a system doing this until I was confident it was unbiased.

      * It’s all machine learning, NOT artificial intelligence: no intelligence involved, just mathematical parameters “learned” by an algorithm and applied to new data.
      • Dojan@lemmy.world · 1 year ago

        I think the lack of transparency, both about the data used for training and about the actual statistics of the model itself, is pretty worrying.

        There needs to be regulation around that, because you can’t expect companies to automatically be transparent and forthcoming if they have something to gain by not being so.
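
        As a rough sketch of what the “actual statistics of the model” could look like if they were published, assuming labelled evaluation data with a group attribute exists (all field names here are made up):

        ```python
        from collections import defaultdict

        # Made-up evaluation records: the group the driver belongs to, whether an
        # offence actually occurred, and whether the model flagged one.
        records = [
            {"group": "a", "offence": True,  "flagged": True},
            {"group": "a", "offence": False, "flagged": False},
            {"group": "b", "offence": False, "flagged": True},   # a false positive
            {"group": "b", "offence": True,  "flagged": True},
        ]

        stats = defaultdict(lambda: {"fp": 0, "negatives": 0})
        for r in records:
            if not r["offence"]:
                stats[r["group"]]["negatives"] += 1
                stats[r["group"]]["fp"] += int(r["flagged"])

        for group, s in stats.items():
            rate = s["fp"] / s["negatives"] if s["negatives"] else 0.0
            print(f"group {group}: false-positive rate {rate:.0%} ({s['fp']}/{s['negatives']})")

        # A large gap between groups here is exactly the kind of number regulation
        # could force vendors to measure and disclose before deployment.
        ```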

      • Aurenkin@sh.itjust.works · 1 year ago

        This is a really important concern, thanks for bringing it up. I’d really like to know more about what they are doing in this case to try and combat that. Law enforcement in particular feels like an application where managing bias is extremely important.

        • the_sisko@startrek.website · 1 year ago

          I would imagine the risk of bias here is much lower than with, for example, the predictive policing systems that are already in use in US police departments. Or the bias involved in ML models for making credit decisions. 🙃

      • Lmaydev@programming.dev · 1 year ago

        Machine learning is a type of artificial intelligence. As is a pathfinding algorithm in a game.

        Neural networks were some of the original AI systems, dating back decades. Machine learning is a relatively new term for that approach.

        AI is an umbrella term for anything that mimics intelligence.
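
        For what it’s worth, a concrete example of “AI without any learning” is a classic search algorithm; this tiny breadth-first pathfinder (just a sketch, unrelated to the camera system) is the kind of thing game developers have always filed under AI:

        ```python
        from collections import deque

        def bfs_path(grid, start, goal):
            """Breadth-first search on a grid of 0 = open / 1 = wall; returns a path or None."""
            rows, cols = len(grid), len(grid[0])
            queue = deque([[start]])
            seen = {start}
            while queue:
                path = queue.popleft()
                r, c = path[-1]
                if (r, c) == goal:
                    return path
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                        seen.add((nr, nc))
                        queue.append(path + [(nr, nc)])
            return None

        grid = [
            [0, 0, 0],
            [1, 1, 0],
            [0, 0, 0],
        ]
        print(bfs_path(grid, (0, 0), (2, 0)))  # no training data, yet it "plans" a route
        ```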

        • EndlessApollo@lemmy.world · 1 year ago

          There’s nothing intelligent about it. It’s no smarter than a chatbot or a phone’s autocorrect. It’s a buzzword applied to it by tech bros who want to make a bunch of money off it.

          • Lmaydev@programming.dev · 1 year ago

            Indeed. That’s why it’s called artificial intelligence.

            Anything that attempts to mimic intelligence is AI.

            The field was established in the 50s.

            Your definition of it is wrong, I’m afraid.

            • EndlessApollo@lemmy.world · 1 year ago

              The only people who call it that are people who don’t get what AI actually is, or don’t want to know because they think it’s the future. There is exactly nothing intelligent about it. Stop spreading tech bro bullshit; call it machine learning, because that’s what it actually is. Or are you really drinking the ML kool-aid hard enough that this is your hill to die on? It’s not even as intelligent as a parrot that’s learned to recognize colors and materials; it’s literally just a souped-up Cleverbot.

              • Lmaydev@programming.dev · 1 year ago

                Literally the definition, my friend. You just don’t know what the term is referring to.

                Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of human beings or animals. AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo), generative or creative tools (ChatGPT and AI art), and competing at the highest level in strategic games (such as chess and Go).[1]

                Artificial intelligence was founded as an academic discipline in 1956.[2] The field went through multiple cycles of optimism[3][4] followed by disappointment and loss of funding,[5][6] but after 2012, when deep learning surpassed all previous AI techniques,[7] there was a vast increase in funding and interest.

                The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence (the ability to solve an arbitrary problem) is among the field’s long-term goals.[8] To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics.[b] AI also draws upon psychology, linguistics, philosophy, neuroscience and many other fields.[9]

                • EndlessApollo@lemmy.world · 1 year ago

                  I know exactly what AI is referring to. It’s referring to a process that has no intelligence behind it. There is no “field of AI”; it’s a blatant misnomer, just like when they came up with “hoverboards” that still had wheels. Stop being a tech bro before you embarrass yourself and brag about your Bored Apes or some shit.

    • Rooki@lemmy.world · 1 year ago

      They will though, in the future. And because it’s a (black and white) camera, there will be many false positives. That is what the normal driver should fear: that the police just say “yeah, everything’s fineeeee” and let the AI loose. This is just the first step toward that. I really doubt it would RELIABLY detect seat belt offenses.

      • LifeInMultipleChoice@lemmy.world · 1 year ago

        See, this is why I only drive when I am drinking a 20oz coffee and eating a footlong sub. That way, when the AI accuses me of being distracted by being on the phone, the human it gets sent off to for review will be like, “oh no, he was simply balancing a sandwich on his lap while he took the lid off to blow on his coffee so it wasn’t too hot; the AI must have thought the lid was a phone.”

        Besides, it also ensures I use a hands-free device for my phone because, face it… I don’t have any free hands; I’m busy trying to find where that marinara sauce fell on my shirt when I was eating the last bite of meatball sub. (Add pepperoni and buffalo sauce.) Have to stay legal after all.