According to The New York Times, Facebook recently apologized after its artificial intelligence software labeled Black men "primates" in a video featured on the social media network.
The video, posted by The Daily Mail on June 27, 2020, shows clips of Black men and police officers. An automatic prompt asked users if they would like to "keep seeing videos about Primates," despite the video clearly featuring no content related to primates.
“As we have said, while we have made improvements to our A.I., we know it’s not perfect, and we have more progress to make,” Facebook said in a statement to The New York Times. “We apologize to anyone who may have seen these offensive recommendations.”
A former content designer at Facebook flagged the issue after a friend forwarded a screenshot of the prompt. A Facebook spokesperson told the publication that it was a clearly unacceptable error, and said the recommendation software involved had been disabled while the company looked "into the root cause."
“We disabled the entire topic recommendation feature as soon as we realized this was happening so we could investigate the cause and prevent this from happening again,” the spokesperson said.
Technology companies have dealt with similar issues before, with critics arguing that facial recognition technology is biased against people of color. Companies including Twitter and Google have also come under fire for possible biases within their artificial intelligence software.