Was this AI trained on an unbalanced dataset (only black folks)? Or has it only been used to identify photos of black people? I have so many questions: some technical, some about media sensationalism.

  • Yoruio@lemmy.ca · 1 year ago

    Was this AI trained on an unbalanced dataset (only black folks?)

    It’s probably the opposite: the AI was likely trained on a dataset of mostly white people, and is therefore better able to distinguish between white faces.

    It’s a problem that has been seen in ML before, especially at companies based in the US, where it is simply easier to gather large amounts of training data on white people than on people of other skin colors.

    It’s really not dissimilar to how people work, either: humans are generally better at distinguishing between two people of a race they grew up around, and you’ll make more mistakes when trying to identify people of races you’re less familiar with.

    The problem is when the police use these tools as an authoritative matching algorithm.
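
    Toy numbers to make that concrete (a minimal Python/numpy sketch; every value here is invented, and the “model” is just a stand-in for the imbalance effect, not any real system). If training leaves one group’s identities crowded together in the model’s embedding space, more innocent people from that group fall under a fixed match threshold:

      import numpy as np

      rng = np.random.default_rng(0)

      def false_match_rate(spread, n_ids=500, dim=16, noise=0.8, thresh=5.5):
          # Each identity gets a "true" embedding; `spread` is how far apart
          # the model places distinct identities of this group.
          centers = rng.normal(0.0, spread, (n_ids, dim))
          # Two photos per identity = true embedding + capture noise.
          a = centers + rng.normal(0.0, noise, (n_ids, dim))
          b = centers + rng.normal(0.0, noise, (n_ids, dim))
          # Distance from every photo in `a` to every *other* identity in `b`.
          d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
          impostor = d[~np.eye(n_ids, dtype=bool)]
          return (impostor < thresh).mean()

      # Stand-in for a well-represented group: identities spread far apart.
      print("majority-group false-match rate:", false_match_rate(spread=1.0))
      # Stand-in for an under-represented group: identities crowded together.
      print("minority-group false-match rate:", false_match_rate(spread=0.5))

    At the same threshold, the crowded group collects far more false matches, which is exactly the failure mode that matters when a “match” can trigger an arrest.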

    • LetterboxPancake@sh.itjust.works · 1 year ago

      It’s not only growing up with them. We’re just better at identifying people/animals/things we’re familiar with. Horses all look the same if you’re not around them regularly: you can distinguish colours, but that’s it.

      Not comparing people to horses by the way…

        • some_guy@lemmy.sdf.org · 1 year ago

          I checked your name to see if you were a contrarian from another thread. You aren’t.

          Then I thought about the name you chose. Did you mean to spell Dessert (the treat after a meal), or was that a misspelling? Then I considered that, regardless of the intended spelling, your name appears to refer to a war (Desert Storm: USA vs. Iraq in the ’90s). Even if it’s playful (desserts storming me, yum!), I dunno. At this point, I don’t suspect we align in ideologies. I’ll stop analyzing here.

    • lntl@lemmy.ml (OP) · 1 year ago

      I thought they would have trained it on mugshots. Either way, it should never be used to make direct arrests. I feel like its best use would be something like an anonymous tip line that leads to investigation.

      • Yoruio@lemmy.ca · 1 year ago

        Using mugshots to train AI without consent feels illegal. Plus, it wouldn’t even make a very good training set, as the AI would only learn to identify faces in perfectly straight-on shots taken under ideal lighting conditions.

    • gramathy@lemmy.ml · 1 year ago

      Also makes me wonder whether our digital color spaces, which are bad at representing darker shades, contribute as well.

  • DavidGarcia@feddit.nl · 1 year ago

    Putting any other issues aside for a moment (I’m not saying they aren’t also real): cameras need light to make photos, and the more light they get, the better the image quality. Just look at astronomy: we don’t find the dark asteroids/planets/stars first, we find the brightest ones, and we know more about them than about bodies with lower albedo/light intensity. So it is literally, physically harder to collect information about anything dark, and that includes black people. If one person has a skin albedo of 0.2 and another 0.6, you get a third as much information from the first in the same amount of time, all things being equal.

    Also consider that cameras have a limited dynamic range, and white skin is often much closer in brightness to the objects around us than black skin is. The facial features of a black person can fall outside the dynamic range of a camera exposed for the rest of the scene and be lost.
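
    Back-of-the-envelope version of both points (a toy linear sensor model; the 10% feature contrast and everything else here are invented numbers, and real cameras apply gamma correction on top of this):

      albedo_light, albedo_dark = 0.6, 0.2
      print("reflected-light ratio:", albedo_light / albedo_dark)  # 3.0x

      # Same exposure on an 8-bit sensor: pixel value ~ albedo * 255.
      # Suppose a facial feature shifts local reflectance by about 10%:
      for albedo in (albedo_light, albedo_dark):
          base = albedo * 255
          contrast = 0.10 * base  # brightness step the feature produces
          print(f"albedo {albedo}: base {base:.0f}/255, feature contrast ~{contrast:.0f} levels")

    That’s roughly 15 brightness levels of feature contrast at albedo 0.6 versus about 5 at 0.2, and 5 levels sits much closer to the sensor’s noise floor.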

    The real issue with these AIs is that they aren’t well calibrated: the output confidence should mirror how often predictions are correct, so out of 100 predictions made with 0.3 confidence, about 30 should be right. Then any prediction below 90% confidence or so could be made illegal for the police to act on, or something like that. Basically, the model should tell you when it doesn’t have enough information, and the police should act on that accordingly.
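
    For reference, a minimal sketch of how that calibration gap is usually measured, as expected calibration error (the toy example model at the end is hypothetical):

      import numpy as np

      def expected_calibration_error(conf, correct, n_bins=10):
          # Bin predictions by stated confidence; in a calibrated model, the
          # accuracy in each bin matches the bin's mean confidence (0.3 -> 30%).
          conf = np.asarray(conf, dtype=float)
          correct = np.asarray(correct, dtype=float)
          bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
          ece = 0.0
          for b in range(n_bins):
              mask = bins == b
              if mask.any():
                  ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
          return ece

      # A model that claims 0.9 confidence but is right only 60% of the time:
      print(expected_calibration_error([0.9] * 10, [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]))  # ~0.3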

    I mean, really, facial recognition should be illegal for the police to use, but that’s beside the point.

    • some_guy@lemmy.sdf.org · 1 year ago

      I don’t know that facial recognition should be illegal for cops to use (though I don’t want them using it, overall), but there should be guardrails in place that prevent them from using it as anything more than “let’s look into this person further.”

      Put differently, a report of a certain model car of a certain color can tip them off to investigate someone driving such a car. It isn’t a reason to arrest that person.

    • Manas@lemdro.id · 1 year ago

      Exactly! I don’t think any programmer would intentionally go out of their way to make it so that only people with dark skin tones are matched from the database. It has more to do with how hard it is to detect facial features on darker skin tones: the image carries noisier information per pixel, and pixel intensities end up similar across patches of the image because of the darker skin. But that’s just my unbiased programmer’s way of thinking. Let’s hope the world is still beautiful! We are all humans, after all.
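
      A quick numpy sketch of that intuition (every number here is invented): scale the same pattern of facial-feature reflectance by two albedos and compare the feature contrast against a fixed amount of sensor noise.

        import numpy as np

        rng = np.random.default_rng(1)
        pattern = rng.uniform(0.9, 1.1, (32, 32))  # features as +/-10% reflectance
        read_noise = 2.0  # sensor noise in pixel counts (made up)

        for albedo, label in ((0.6, "lighter skin"), (0.2, "darker skin")):
            patch = albedo * 200 * pattern  # toy exposure -> pixel counts
            snr = patch.std() / read_noise  # feature contrast relative to noise
            print(f"{label}: contrast {patch.std():.1f} counts, contrast/noise {snr:.1f}")

      The darker patch carries the same feature pattern at roughly a third of the contrast, so the same sensor noise drowns out proportionally more of it.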

    • lntl@lemmy.ml (OP) · 1 year ago

      Yes, there are technical challenges in implementing an AI solution such as this one. From a leadership perspective, however, arrests cannot be made on AI predictions alone. The predictions would be best used like an anonymous tip line that leads to further investigation, never directly to an arrest.

  • mohKohn@kbin.social · 1 year ago

    12 people. We’re talking about 12 people, so any conclusions are suspect. That being said, facial recognition struggling with black faces due to insufficient training data is an extremely common problem, so it’d be unsurprising.
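
    A back-of-the-envelope check on what twelve cases can tell you (the base rates p below are assumptions, not data): if mistaken arrests hit people in proportion to some share p of Black people among those the system is run on, the chance that all twelve reported cases are Black is p^12.

      n = 12
      for p in (0.3, 0.5, 0.9):
          print(f"p = {p}: chance all {n} cases are Black = {p**n:.0e}")

    Even at p = 0.9 that’s under a 30% chance, so twelve-for-twelve is hard to get by luck alone; the catch is that the right p is exactly what’s in dispute, which is why such a small sample stays suspect.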

    • lntl@lemmy.ml (OP) · 1 year ago

      That’s exactly my point about media sensationalism. It’s really not a large sample. Way more people have been arrested and imprisoned by the justice system without any AI involvement.

    • lntl@lemmy.ml (OP) · 1 year ago

      Then the title would read:

      In every reported case where police mistakenly arrested someone, that person has been Black

      Yeah, that could be the case

  • DessertStorms@kbin.social · 1 year ago

    It’s amazing how hard some people will work to deny that demonstrable biases, shaped by the society we live in, exist in and massively impact science and technology, as if they themselves were above such things, all while literally demonstrating their own biases.

    • hh93@lemm.ee · 1 year ago

      I always wonder whether the people who push back so hard against the idea of systemic/structural racism truly believe they are being oppressed when someone tries to address it, or whether they are fully aware of the advantages they have just because they were born with the “right” skin colour in the right neighbourhood, and oppose change for purely egoistic reasons because they don’t want to lose that advantage.

      • possibly a cat@lemmy.ml · 1 year ago

        I’m used to saying follow the money, but this time, try following the thought process. Maybe sub in less offensive and bigoted ideas, though, for the sake of your mental health.

        You hear an issue brought up on the only news channel that will tell it to you straight. Foreign SF commandos are going undercover in our schools’ dressing rooms to steal slaves for satanic rituals used to summon evil aliens or something, right? Naturally you have a strong reaction. The next time that issue comes up, it is delivered with a more realistic framing by an entirely different party - say, people trying to stop violence against trans populations - but that’s not how you heard the story, and now it sounds like these people are apologizing for the evil aliens compared to what you’ve heard!

        Or, you have a small problem. You’re from a small post-industrial town and everyone is dying because the corporations exploited your community before shipping the jobs to practical slaves overseas, leaving behind the local labor/economy. Now everyone among your friends and family has high cancer rates and is dying in their 40s. Something can’t be right. There has to be another way… but you’ve never heard of one, nor can you picture one clearly on your own. This is capitalist realism: the inability to imagine beyond the current configuration. In your mental turmoil you stumble upon a meme or a soundbite: you criticize capitalism, and yet you exist in it. Funny! Not funny at all, but how do you even come back against something that dense? Well, you don’t really have any ideas, because capitalism is all you’ve ever known, and it sounds like a great thing, so critics of it must be dumb. Now whenever people critique capitalism, the words pop into your head: “Yet you exist in it!” The gotcha trigger has been programmed into your mind.

        This kind of programming is omnipresent in our society - it’s ancient, yet honed in modernity.

        Sometimes people buy into the platform 100% and start to be able to read between the lines. They learn to dog-whistle all on their own for the sake of the tribalist narrative. They are the vanguard, making these ideas more visible and approachable. These are the ones who go straight from banning trans athletes (supposedly just an athletics thing) to making it illegal to identify as trans in school without a moment of hesitation, and from there straight to making it a crime just to exist as a trans adult. These people don’t care about the athletics, just like they don’t care about children when they push pro-corporate/anti-privacy legislation. They were driven by fear to normalize the party line, until the party line overpowered the fear. The pressure for this usually comes from friends, family, coworkers, social media, and one-way media choices.

        The next level are the pro/semi-pro influencers that receive training (originally lobbyists but now they lobby more than just politicians!) to support astroturf movements. They usually don’t care what they’re asked to say because they get spoiled to do so. The funding and planning of these activities are often done by think tanks, and the think tanks are often a link directly to the oligarch ring leaders. (If you aren’t at least being flown around and wined and dined by ‘handlers’ then you’re just an entry level chump in the big picture.)

      • DessertStorms@kbin.social · 1 year ago

        Both and/or either?
        It’s a complex and deep topic that is also inextricably intertwined with capitalism and other oppressive systems, so giving an in-depth answer would take more brain power than I currently have available lol (I have tried, but I keep going off on tangents and getting tangled up in explanations), but I think this quote is quite relevant, so I might leave it at that:

        “If you can convince the lowest white man he’s better than the best colored man, he won’t notice you’re picking his pocket. Hell, give him somebody to look down on, and he’ll empty his pockets for you.” -Lyndon B. Johnson

    • lntl@lemmy.ml (OP) · 1 year ago

      demonstrable biases, shaped by the society we live in, exist in and massively impact science and technology

      And then science and technology fortify the biases of society; it’s a feedback loop.

  • EinfachUnersetzlich@lemm.ee · 1 year ago (edited)

    Just a note that it appears this is USA only. No comment on differences around the world (if the technology is used elsewhere).

    • buwho@lemmy.ml · 1 year ago

      In the documentary Coded Bias, police in England were using new AI facial recognition technology to find criminals on the streets, and yes, the people it flagged were almost all black. The documentary says this is because the models were trained mostly on white people, so they couldn’t differentiate black people’s features as well. Or something to that effect.

    • lntl@lemmy.ml (OP) · 1 year ago

      I imagine that in China the headline would read:

      In every reported case where police mistakenly arrested someone using facial recognition, that person has been Chinese

  • nieceandtows@programming.dev · 1 year ago

    I don’t think this is some systemic racism. Rather, it’s the technology itself that’s lacking. I remember even those motion-activated bathroom sinks had problems working well with black hands. I think they’re just not good enough at differentiating between darkness and black skin.

      • nieceandtows@programming.dev · 1 year ago

        Just because it can be doesn’t mean that’s the first place we go to. Stupidity is more likely than maliciousness and all that.

        • Fish Id Wardrobe@mastodon.me.uk · 1 year ago

          @nieceandtows ::gestures at mankind’s enormously long and complex history of systemic racism::

          People are going to read your post and assume that you are apologising for the police *because* they are the police. You know, one of those “they must have done something wrong or the police wouldn’t have arrested them” people.

          I’m sure you’re not one of those. Just saying how it looks.

          • nieceandtows@programming.dev · 1 year ago

            Eh. I have a loved one with some undiagnosed mental health issues, and it’s a constant struggle because they always assume the worst of anyone and everyone. Watching and living with them, I’ve learned that it’s always better to assume good about people than to assume bad about them. Assuming bad things without proof only ever ruins your happiness and relationships. People can read my comments and understand what I’m saying, like you do. If they don’t and assume I’m racist, it only proves my point.

              • nieceandtows@programming.dev · 1 year ago

                Not talking about systemic racism in general; I know there’s a lot of that. I’m talking about systemic racism causing this particular issue. I say that because there have been cases of motion sensors not detecting black hands due to technical issues. I’m not apologizing for anyone, just pointing out that this has happened before because of technical deficiencies.

                • Fish Id Wardrobe@mastodon.me.uk · 1 year ago

                  @nieceandtows The fact that there have been issues with sensors (which is true) does not disprove systemic racism (which exists). That’s like saying that because I put vinegar in the dressing the lemon juice wasn’t sour. It doesn’t follow.

    • vrighter@discuss.tchncs.de · 1 year ago

      Haha, this reminds me of an episode of Better Off Ted where they replaced all the sensors with optical-based ones that didn’t recognize black people. Their solution was to hire white guys to follow the black employees around to open doors and turn on lights for them.

    • cobra89@beehaw.org · 1 year ago (edited)

      IMO, the fact that the models aren’t accurate with people of color but they’re putting the AI to use for them anyway is the systemic racism. If the AI were not good at identifying white people do we really think it would be in active use for arresting people?

      It’s not the fact that the technology is much much worse at identifying people of color that is the issue, it’s the fact that it’s being used anyway despite it.

      And if you say “oh, they’re just being stupid and didn’t realize it’s doing that,” then it’s egregious that they didn’t even check for it.

      • nieceandtows@programming.dev · 1 year ago

        That part I can agree with. These issues should have been fixed before it was rolled out. The fact that they don’t care is very telling.

    • DessertStorms@kbin.social · 1 year ago

      it’s the technology itself that’s lacking.

      The technology is designed by people, people who didn’t consider those with dark skin and so designed a technology that is lacking.
      Let’s not act as if technology just springs spontaneously into being.

    • lntl@lemmy.ml (OP) · 1 year ago

      I think it is some systemic racism. The AI didn’t arrest this person; police officers did. They did no further investigation before making the arrest because they didn’t have to: the person has black skin. Case closed.

  • nxfsi@lemmy.world · 1 year ago

    Reminder that Google still hasn’t actually fixed the issue in which their image recognition algorithm mislabels black people as gorillas

      • Bibliotectress@lemmy.world · 1 year ago

        I watch it on Hulu, in case you have that. The whole show is a joke about corporate America and how companies will do anything for a profit. It’s so good. There are only 2 seasons.

    • superkret@feddit.de · 1 year ago

      The question here is: Why can’t the software interpreting the images discern black people as well?

      • Rentlar@lemmy.ca · 1 year ago

        Parent commenter is being downvoted… but with a camera tuned for outdoor visibility, I can see why it’s difficult to detect features on a person with darker skin…

        Similar to how it’s so hard to take good pictures of this little guy:

        [image: black cat in a cardboard box]

    • lntl@lemmy.ml (OP) · 1 year ago

      Nothing more? A pregnant lady was arrested and put in jail. If this AI “can’t discern” black faces as well, then it shouldn’t be used to make arrests.