Was this AI trained on an unbalanced data set? (Only black folks?) Or has it only been used to identify photos of black people? I have so many questions: some technical, some on media sensationalism

  • DavidGarcia@feddit.nl · 33 points · 1 year ago

    Putting any other issues aside for a moment (I’m not saying they aren’t real too): cameras need light to make photos, and the more light they get, the better the image quality. Just look at astronomy: we don’t find the dark asteroids/planets/stars first, we find the brightest ones, and we know more about them than about a body with lower albedo. So it is literally, physically harder to collect information about anything dark, and that includes dark skin. Compare a surface with an albedo of 0.2 to one with 0.6: you get 3x less reflected light, and therefore less information, in the same amount of time, all else being equal.
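    To make that 3x figure concrete, here’s a toy sketch (my own numbers, assuming a simple photon shot-noise model where noise grows as the square root of the signal): collecting 3x less light costs you a factor of sqrt(3) in signal-to-noise ratio.

```python
import math

# Toy shot-noise model (illustrative numbers, not from any real camera):
# detected photons N = albedo * incident photons, noise ~ sqrt(N),
# so SNR = N / sqrt(N) = sqrt(N).
def relative_snr(albedo, incident_photons=10_000):
    return math.sqrt(albedo * incident_photons)

ratio = relative_snr(0.6) / relative_snr(0.2)
print(round(ratio, 2))  # sqrt(3) ≈ 1.73
```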

    Also consider that cameras have a limited dynamic range, and white skin is often much closer in brightness to the objects around us than black skin is. The facial features of a dark-skinned person can therefore fall outside the camera’s dynamic range and be lost.
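    A quick back-of-the-envelope sketch of that dynamic-range point (hypothetical linear 8-bit encoding, my own numbers): the darkest 5% of the brightness range only gets a handful of the 256 available codes, so shadow detail is quantized away while bright areas keep plenty of levels.

```python
def to_8bit(lum):
    """Quantize a linear luminance in [0, 1] to an 8-bit code."""
    return max(0, min(255, round(lum * 255)))

# Levels available for shadow detail (bottom 5% of the brightness range)
# versus the bright half of the scene.
dark_codes = to_8bit(0.05) - to_8bit(0.0)
bright_codes = to_8bit(1.0) - to_8bit(0.5)
print(dark_codes, bright_codes)  # 13 vs 127 distinct levels
```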

    The real issue with these AIs is that they aren’t well calibrated, meaning the output confidence should mirror how often the predictions are correct: if the model reports a confidence of 0.3, then out of 100 such predictions about 30 should be right. Any prediction below 90% or so should be illegal for the police to act on, or something like that. Basically, the model should tell you when it doesn’t have enough information, and the police should act accordingly.

    I mean, really, facial recognition should be illegal for the police to use at all, but that’s beside the point.

    • some_guy@lemmy.sdf.org · 6 points · 1 year ago

      I don’t know that facial recognition should be illegal for cops to use (though I don’t want them using it, overall), but there should be guardrails in place that prevent them from using it as anything more than “let’s look into this person further.”

      Put differently, a report of a certain model car of a certain color can tip them off to investigate someone driving such a car. It isn’t a reason to arrest that person.

    • Manas@lemdro.id · 5 up / 2 down · 1 year ago

      Exactly! I don’t think any programmer would intentionally go out of their way to make it so that only people with dark skin tones are matched from the database. It has something to do with how it is not easy to detect facial features on a darker skin tone: the image will carry noisier information per pixel, and the pixel intensities will be nearly uniform across patches of the image because of the darker skin tone. But that’s just my unbiased programmer’s way of thinking. Let’s hope the world is still beautiful! We are all humans, after all.
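      A toy illustration of that low-contrast point (made-up pixel values, not from any real image): feature detectors rely on local contrast, and an underexposed patch has far less of it; at 1/10 the signal, the variance drops by a factor of 100.

```python
def patch_contrast(pixels):
    """Variance of a pixel patch, as a crude proxy for local contrast."""
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

well_exposed = [120, 140, 100, 160, 90]   # clear edges and gradients
underexposed = [12, 14, 10, 16, 9]        # same structure at 1/10 the signal
print(patch_contrast(well_exposed), patch_contrast(underexposed))
# 656.0 vs ~6.56 — a 100x drop in usable contrast
```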

    • lntl@lemmy.ml (OP) · 2 points · 1 year ago

      Yes, there are technical challenges in implementing an AI solution like this one. From a leadership perspective, however, arrests cannot be made on AI predictions alone. These systems are best used like an anonymous tip line: something that leads to further investigation, but never directly to an arrest.