
US Uses Racist Robots Against Black Defendants In Court

54 posts in this topic

Recommended Posts

Adonis
Quote

In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate as white people (45% to 24%), according to the investigative journalism organisation ProPublica.

Compas and programs similar to it were in use in hundreds of courts across the US, potentially informing the decisions of judges and other officials. The message seemed clear: the US justice system, reviled for its racial bias, had turned to technology for help, only to find that the algorithms had a racial bias too.

 

The Guardian: Rise of Racist Robots

princedonte

This is disgusting and wrong. The court system needs massive reforms.

Adonis
4 minutes ago, princedonte said:

This is disgusting and wrong. The court system needs massive reforms.

I'm going to have to get into ML/AI so I can correct this. 

Adonis

I guess ATRL will care about this when robots are used to lock them up unfairly.... or when AI is negatively biased against them and affects them directly, like denying them jobs or something.

robotdanger

I was just listening to a podcast about this. I have a feeling AI will bring a lot of problems in the very near future that we haven’t thought of.

Yndda

jeez i just saw this. how gross… the justice system is broken

JonginBey

The lengths racists go to are pathetic and embarrassing.

BlooRiverAnimal

This is giving me sentinel teas. :biblio: 

Communion

But ATRL told me technology fields don't need diversity and that these problems would never arise. :cm:

Dessy

Technology is only as good as the people who make it :cm: 

Whale Tongue
20 minutes ago, Communion said:

But ATRL told me technology fields don't need diversity and that these problems would never arise. :cm:

The way this was designed is not racist. The program uses data on financial position, zip code, and occupation. The program is biased because of racism. Interesting limitation of AI! 
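A minimal sketch of the mechanism described above, using entirely made-up numbers and a hypothetical model (the zip-code feature, the invented labels, and the logistic-regression choice are illustrative assumptions, not how Compas actually works): even when race is never given to the model, an input like zip code that correlates with race can reproduce a racial gap in who gets wrongly flagged.

```python
# Hypothetical illustration only: the model never sees race, but zip code is
# correlated with race, so the learned rule still flags one group's
# non-reoffenders far more often. All numbers are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Race is never shown to the model; zip code is a proxy correlated with it.
race = rng.integers(0, 2, n)                                  # 0 = group A, 1 = group B
zip_code = (race + rng.normal(0, 0.6, n) > 0.5).astype(int)

# Made-up "reoffended" labels that depend on zip code (a stand-in for
# neighborhood-level arrest exposure), not on race itself.
reoffend = (rng.random(n) < 0.20 + 0.45 * zip_code).astype(int)

# Risk model trained only on the proxy feature.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), reoffend)
flagged = model.predict(zip_code.reshape(-1, 1))

# False positive rate per (hidden) race group: share of people who did NOT
# reoffend but were still flagged as high risk.
for g, name in [(0, "group A"), (1, "group B")]:
    mask = (race == g) & (reoffend == 0)
    print(f"{name}: {flagged[mask].mean():.0%} of non-reoffenders wrongly flagged")
```

The toy numbers exist only to show that the disparity in false positives can appear without any race variable in the model; the proxy feature carries it in.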

Adonis
13 minutes ago, Dessy said:

Technology is only as good as the people who make it :cm: 

This is exactly what several of my friends who are also engineers said; some of them are really talented people.... I assume AI is using bad information to make its predictions about black people.... or it's just a reflection of how the world views black people. 

Archetype
10 minutes ago, Adonis said:

This is exactly what several of my friends who are also engineers said; some of them are really talented people.... I assume AI is using bad information to make its predictions about black people.... or it's just a reflection of how the world views black people. 

Well, that's also a data science issue.  The data is simply showing trends, but the intention behind these studies could be racist or just targeting POC overall.  It's good that they realized something was off about their system.

Communion
20 minutes ago, Whale Tongue said:

The way this was designed is not racist. The program uses data on financial position, zip code, and occupation. The program is biased because of racism. Interesting limitation of AI! 

And my point is that people more likely to experience racism are more likely to think about these things as possible risks of tools in practice.

 

Even with something like hand-motion soap dispensers. It's not designed with racism in mind; the field is just so homogenized by one type of person that no one in the room seems to think, "does the sensor work well with lighting to detect all shades of skin?" Could white designers think of that? If they were good at their job, of course, but they're just... not. There's an efficiency gap happening where technology is aimed to be most efficient in only certain ways, with other ways not thought about by the people it doesn't impact.

 

No one in the room stopped to think: "will this tool not account for racial biases that impact the data it's assessing?" And so one has to wonder - why?

Javan
2 minutes ago, Communion said:

And my point is that people more likely to experience racism are more likely to think about these things as possible risks of tools in practice.

 

Even with something like hand-motion soap dispensers. It's not designed with racism in mind; the field is just so homogenized by one type of person that no one in the room seems to think, "does the sensor work well with lighting to detect all shades of skin?" Could white designers think of that? If they were good at their job, of course, but they're just... not. There's an efficiency gap happening where technology is aimed to be most efficient in only certain ways, with other ways not thought about by the people it doesn't impact.

 

No one in the room stopped to think: "will this tool not account for racial biases that impact the data it's assessing?" And so one has to wonder - why?

And the development of Kodak and pretty much all cameras :celestial5:

priest

shouldn't be using robots to determine prison sentences in the first place. that's just stupid.

 

that being said tho, we black and brown people are probably legitimately more likely to re-offend. if we have a criminal record we are way less likely to get any opportunities to rebuild our lives, and lacking legal opportunities we will be more likely to take chances with illegal means of supporting ourselves.

 

the robot isn't racist, the people using it are. the bot is probably just honestly evaluating the data, but that isn't how things should be done at all. 

Whale Tongue
7 minutes ago, Communion said:

And my point is that people more likely to experience racism are more likely to think about these things as possible risks of tools in practice.

 

Even with something like hand-motion soap dispensers. It's not designed with racism in mind; the field is just so homogenized by one type of person that no one in the room seems to think, "does the sensor work well with lighting to detect all shades of skin?" Could white designers think of that? If they were good at their job, of course, but they're just... not. There's an efficiency gap happening where technology is aimed to be most efficient in only certain ways, with other ways not thought about by the people it doesn't impact.

 

No one in the room stopped to think: "will this tool not account for racial biases that impact the data it's assessing?" And so one has to wonder - why?

Actually most AI-using firms have ethics/legal committees to deal with this stuff 

 

ftr im not against diversity, i just don’t think it would have prevented this 

Adonis
24 minutes ago, Whale Tongue said:

Actually most AI-using firms have ethics/legal committees to deal with this stuff 

 

ftr im not against diversity, i just don’t think it would have prevented this 

And I think it can prevent this. Red flags would've been thrown up earlier. The error rate for black defendants was twice that for whites. I purposefully highlighted that part and put it in big letters because I knew people would overlook it, and you still overlooked the most important part.

 

2 hours ago, Adonis said:

wrongly flagging them at almost twice the rate as white people (45% to 24%),

The data shows the AI is wrong almost 50% of the time for black defendants. That means you are literally gambling with people's lives. It also shows that white people are really only able to flag the behavior of other white people most of the time, and even then there are significant errors. 
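To make the quoted figures concrete, here is a rough worked sketch with hypothetical counts (the 1,000-person group sizes are invented; only the resulting rates track the 45% and 24% reported by ProPublica). The measure is the false positive rate per group, i.e. the share of defendants who did not reoffend but were still labeled high risk, not overall accuracy.

```python
# Rough sketch of the statistic under discussion: among defendants who did
# NOT go on to reoffend, what share were still labeled high risk. The counts
# are hypothetical, chosen only so the rates land near the reported
# 45% (black defendants) vs 24% (white defendants).

def false_positive_rate(wrongly_flagged: int, total_non_reoffenders: int) -> float:
    return wrongly_flagged / total_non_reoffenders

groups = {
    # group: (non-reoffenders flagged high risk, all non-reoffenders)
    "black defendants": (450, 1000),
    "white defendants": (240, 1000),
}

for name, (wrongly_flagged, total) in groups.items():
    rate = false_positive_rate(wrongly_flagged, total)
    print(f"{name}: {rate:.0%} of non-reoffenders labeled high risk")
```

In other words, the 45% is specifically the share of black defendants who did not reoffend but were still flagged high risk, roughly twice the corresponding share for white defendants.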

Whale Tongue
2 minutes ago, Adonis said:

And I think it can prevent this. Red flags would've been thrown up earlier. The error rate for black defendants was twice that for whites. I purposefully highlighted that part and put it in big letters because I knew people would overlook it. 

 

 

¯\_(ツ)_/¯ Agree to disagree. The methods of algorithm development are fairly well defined at this point. An asian or black computer scientist would produce a similar product, as they would have to deal with the same dataset. 

Dessy
30 minutes ago, Javan said:

And the development of Kodak and pretty much all cameras :celestial5:

Oops at tea.

 

 

Quote

As mentioned earlier, in film photography, color balance has a lot to do with the chemical composition of the film. For many decades, color film in the United States was calibrated to highlight Caucasian skin tones. This was the most fundamental problem. With an unusual degree of skill and attention, a photographer could compensate for the biases in most stages of production. But there was nothing they could do about the film’s color balance. When the famous New Wave filmmaker Jean Luc Godard was commissioned to make a film about Mozambique, he reportedly refused to use Kodachrome film -- the most popular color film at the time. He complained the film, developed for a predominantly white market, was “racist.”

---

The weirdest thing was that for a long time, Roth says, nobody complained. Many photographers were as aware as Godard was of the limitations of their film in shooting darker-skinned subjects, but unlike Godard they found fault in themselves rather than in their materials. When shooting dark subjects they went to extreme lengths to extend their technique.

 

“I remember growing up and seeing Sidney Poitier sweating next to Rod Steiger in ‘In the Heat of the Night,’ and obviously [that was because] it’s very hot in the South,” Oscar-winning director Steve McQueen told the Washington Post, “But also he was sweating because he had tons of light thrown on him, because the film stock wasn’t sensitive enough for black skin.”

https://priceonomics.com/how-photography-was-optimized-for-white-skin/

 

Dessy
3 minutes ago, Whale Tongue said:

¯\_(ツ)_/¯ Agree to disagree. The methods of algorithm development are fairly well defined at this point. An asian or black computer scientist would produce a similar product, as they would have to deal with the same dataset. 

Why would they have to use the exact same dataset? They should actually try to compensate for such obvious racial disparity. 

Adonis
12 minutes ago, Whale Tongue said:

¯\_(ツ)_/¯ Agree to disagree. The methods of algorithm development are fairly well defined at this point. An asian or black computer scientist would produce a similar product, as they would have to deal with the same dataset. 

No they wouldn't, because blacks and asians would be able to consider the data, information, and other things about their race that matter to them but don't necessarily matter to white people. That's the problem with your own bias. It's clouding your judgement. You can disagree, but the bottom line is the error rates are still twice those for white people. Financial status and the other factors used in this test are a better indicator of recidivism among white people. The same isn't true of black people and potentially other minorities. One size doesn't fit all. It's also not something you should want to base sentencing on. 

 

I'm in the field of engineering. I'm one step away from being an AI engineer myself, especially considering that the last two jobs I had were directly related to predicting consumer behavior. 

VOSS

Crazy and disgusting how little testing & regulation there is with this sort of stuff

Whale Tongue
3 minutes ago, Adonis said:

No they wouldn't, because they would be able to consider things about their race that matter to blacks and asians but don't necessarily matter to white people. That's the problem with your own bias. It's clouding your judgement. You can disagree, but the bottom line is the error rates are still twice those for white people. Financial status and the other factors used in this test are a better indicator of recidivism among white people. The same isn't true of black people and potentially other minorities. I'm in the field of engineering. I'm one step away from being an AI engineer myself, especially considering that the last two jobs I had were directly related to predicting consumer behavior. 

I’m also an engineer! The accuracy suffers DRAMATICALLY when you don’t use inputs like poverty, joblessness, and other already racialized factors. Do you really believe that an asian or black engineer would produce a different algorithm than a white engineer? Having worked in the industry myself, I can say the amount of oversight removes a lot of the independent input. Essentially I don’t think it would make a difference because of office politics + the need to maximize performance. 
