AI and Implicit Bias

Last weekend, Rep. Alexandria Ocasio-Cortez sounded the alarm on Twitter about new research finding that the facial recognition software Amazon sells to law enforcement falls short on tests for accuracy and bias. According to the Washington Post’s reporting, researchers said Amazon’s algorithms misidentified the gender of darker-skinned women in about 30 percent of their tests. (Amazon, of course, maintains that the software its customers actually use is not the version the researchers tested.)

The problem stems from the sets of photos the algorithms were trained on, which the researchers said skew heavily toward white men.
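To make the training-data point concrete, here is a minimal sketch in Python of the kind of disaggregated audit that surfaces this failure. The data and group labels below are hypothetical, and this is not the researchers’ actual methodology, just an illustration of the idea: instead of reporting one overall accuracy number, you break error rates out by demographic group.

```python
from collections import defaultdict

# Hypothetical output of a gender classifier, tagged with the
# subject's demographic group. In a real audit these labels would
# come from a benchmark dataset balanced across groups.
results = [
    # (group, predicted_gender, true_gender)
    ("lighter-skinned man", "male", "male"),
    ("lighter-skinned man", "male", "male"),
    ("darker-skinned woman", "male", "female"),    # misidentified
    ("darker-skinned woman", "female", "female"),
    ("darker-skinned woman", "male", "female"),    # misidentified
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, predicted, actual in results:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

# A single overall accuracy number would hide the gap;
# the per-group breakdown is what exposes it.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error rate over {totals[group]} tests")
```

A model trained mostly on photos of white men can post a high overall accuracy while still failing on a large share of darker-skinned women, which is roughly the pattern the researchers reported.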

And if you’re new to the concept of implicit bias, visit Harvard’s Project Implicit to learn more.

Why It’s Hot:

  1. For possibly the first time, Congress has a credible authority on technology, and she sits on the House Oversight Committee, so tech companies might want to take notice.
  2. As AI becomes real, we need to make sure we’re designing for everyone, not just the groups best represented in the training data.

Source: Washington Post