Washington, Oct 18 (Agency) The artificial intelligence algorithm Facebook uses to find and remove posts inciting hatred and enmity has proved ineffective, deleting less than 5 percent of such posts, the Wall Street Journal reported, citing internal company documents. Frances Haugen, a former Facebook employee turned whistleblower, has claimed that Facebook's algorithms fail to adequately identify hateful content and that a shortage of staff prevents the company from removing the violations it does detect.
According to estimates cited in the documents, the artificial intelligence removes posts accounting for only 2 percent of the hate speech viewed on the platform. In March of this year, a group of experts reached a similar conclusion, finding that Facebook's AI removed only 3-5 percent of hate-inciting posts and 0.6 percent of content violating Facebook's rules on violence, while the accounts posting such content remained active.
Facebook CEO Mark Zuckerberg has denied Haugen's claims that the company knew its content was harming teenagers' mental health and that it put little effort into fighting that harm.