Driverless cars worse at detecting children and darker-skinned pedestrians, say scientists::Researchers call for tighter regulations following major age- and race-based discrepancies in AI autonomous systems.

  • Dave@lemmy.nz

    Weird question, but why does a car need to know if it’s a person or not? Like regardless of if it’s a person or a car or a pole, maybe don’t drive into it?

    Is it about predicting whether it’s going to move into your path? Well, can’t you just use LIDAR to detect an object moving and predict its path? Why does it matter if it’s a person?

    Is it about trolley-problem situations, so it picks a pole instead of a person if it can’t avoid a crash?

      • Dave@lemmy.nz

        That seems like the car is relying way too much on video to detect surroundings…

          • Dave@lemmy.nz

            Haha yes, but from the article I got the impression it was across all tested brands. Tesla is being called out at the moment for not having the appropriate hardware that other brands are using (e.g. LIDAR).

    • fresh@sh.itjust.works

      Conant and Ashby’s good regulator theorem in cybernetics says, “Every good regulator of a system must be a model of that system.”

      The AI needs an accurate model of a human to predict how humans move. Predicting the path of a human is different than predicting the path of other objects. Humans can stand totally motionless, pivot, run across the street at a red light, suddenly stop, fall over from a heart attack, be curled up or splayed out drunk, slip backwards on some ice, etc. And it would be computationally costly, inaccurate, and pointless to model non-humans in these ways.
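      The difference between predicting a human and predicting anything else can be sketched in a few lines. This is a toy illustration of the idea, not any real AV stack; all names, classes, and numbers below are illustrative assumptions:

      ```python
      # Toy sketch: why object class changes path prediction.
      # A pole never moves; a car is reasonably modeled by constant velocity;
      # a pedestrian needs multiple hypotheses (stop, keep going, dart sideways).

      def predict_positions(obj_class, pos, vel, horizon_s=2.0):
          """Return a list of plausible future (x, y) positions after horizon_s."""
          x, y = pos
          vx, vy = vel
          if obj_class == "pole":
              return [(x, y)]  # static: a single hypothesis
          if obj_class == "car":
              # constant-velocity extrapolation is usually adequate
              return [(x + vx * horizon_s, y + vy * horizon_s)]
          if obj_class == "pedestrian":
              # humans can stop dead, continue, or abruptly change direction
              walk = 1.5  # m/s, a typical walking speed (assumed)
              return [
                  (x, y),                                    # stops
                  (x + vx * horizon_s, y + vy * horizon_s),  # keeps going
                  (x + walk * horizon_s, y),                 # darts right
                  (x - walk * horizon_s, y),                 # darts left
              ]
          return [(x, y)]  # unknown class: assume static

      # A planner must clear *every* hypothesis, so a pedestrian demands a
      # much wider safety envelope than a pole or a car.
      print(len(predict_positions("pedestrian", (0.0, 0.0), (0.0, 1.0))))
      ```

      The point is that the set of hypotheses, and hence the required safety margin, is conditioned on the object class, which is why the classifier’s accuracy on children and darker-skinned pedestrians matters.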

      I also think trolley-problem considerations come into play, but more like normativity in general. The consequences of driving quickly amongst humans are higher than amongst human-height trees. I don’t mind if a car drives at a normal speed on a tree-lined street, but it should slow down on a street lined with playing children who could jump out at any time.
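      That normative point can also be sketched as code. Again a hedged toy, assuming a hypothetical `target_speed_kmh` policy; the scene labels and thresholds are made-up assumptions, not real regulations:

      ```python
      # Toy sketch: permissible speed depends on what kind of agents line the
      # street, not just on raw obstacle geometry.

      def target_speed_kmh(scene_tags):
          """Pick a cruise speed from coarse scene context (a set of tags)."""
          if "children_present" in scene_tags:
              return 20  # could jump out at any time: widest margin
          if "pedestrians_present" in scene_tags:
              return 30
          return 50      # e.g. a tree-lined street with no people around

      print(target_speed_kmh({"children_present"}))
      ```

      So even with identical geometry, recognising *who* is beside the road changes how the car should behave.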

      • Dave@lemmy.nz

        Thanks, you make some good points. (Safe) human drivers drive differently in situations with a lot of people around them, and we need to replicate that in self-driving cars.

      • theluddite@lemmy.ml

        Anyone who quotes Ashby et al gets an upvote from me! I’m always so excited to see cybernetic thinking in the wild.

    • duffman@lemmy.world

      They need to safely ignore shadows and oil stains on the road; just because there’s contrast in an image doesn’t mean there’s an object.

      • Dave@lemmy.nz

        Sure, but why on earth are we relying on cameras to drive cars? Many modern cars have radar, which is far more reliable.

        • duffman@lemmy.world

          Natural vision is awesome; it works for billions of humans. We just have nothing close to what human eyes and the brain offer in terms of tech in that spectrum.

          I think it needs to be a combination of sensors, since radar sucks in rain/snow/fog.