However, he found no intentional bias in Meta, either by the company as a whole or by individual employees.
The report’s authors said they found “no evidence of racial, ethnic, national or religious hostility” in the governing teams and noted that Meta has “employees representing different viewpoints, nationalities, races, ethnicities and religions relevant to this conflict.”
Rather, it found numerous cases of unintended bias that damaged the rights of Palestinian and Arabic-speaking users.
In response, Meta said it intends to implement some of the report’s recommendations, including improving its Hebrew-language “classifiers,” which help automatically remove infringing posts using artificial intelligence.
“There are no quick, overnight fixes to many of these recommendations, as BSR makes clear,” the Menlo Park, Calif.-based company said in a blog post Thursday.
“While we have already made significant changes as a result of this exercise, this process will take time, including time to understand how some of these recommendations can best be addressed and whether they are technically feasible.”
Meta, the report confirmed, also made serious enforcement errors. For example, as the war in Gaza raged last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flashpoint of the conflict.
Meta, which owns Instagram, later apologized, explaining that its algorithms had mistaken Islam’s third holiest site for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.
The report echoed issues raised in internal documents by Facebook whistleblower Frances Haugen last fall, showing that the company’s problems are systemic and have long been known within Meta.
A key problem is the lack of content moderators in languages other than English, including Arabic, one of the most common languages on Meta’s platforms.
For users in Gaza, Syria and other conflict-affected regions of the Middle East, the issues raised in the report are nothing new.
Israeli security agencies and watchdogs, for example, have monitored Facebook and bombarded it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.
“They flood our system, completely overwhelming it, forcing the system to make mistakes in favor of Israel,” Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017, told The Associated Press last year.
Israel experienced an intense spasm of violence in May 2021, with weeks of tensions in East Jerusalem escalating into an 11-day war with Hamas militants in the Gaza Strip.
The violence spread to Israel itself, with the country experiencing its worst communal violence between Jewish and Arab citizens in years.
In an interview this week, Israel’s national police chief, Kobi Shabtai, told the newspaper Yediot Ahronot that he believes social media fueled the communal fighting.
He called for social media to be shut down if similar violence occurs again and said he suggested blocking social media to lower the flames last year.
“I’m talking about shutting the networks down completely, calming the situation on the ground and, once it is calm, reactivating them,” he said. “We are a democratic country, but there is a limit.”
The comments caused a stir, and the police issued a clarification stating that his proposal was meant for extreme cases only. Omer Barlev, the cabinet minister who oversees the police, also said Shabtai does not have the authority to impose such a ban.