American and Israeli schools are encouraging parents to have their children delete social media, anticipating that Hamas terrorists will weaponize popular apps to publicize the gruesome killings of hostages.
Schools across the United States and Israel are urging parents to ensure their children delete TikTok, Instagram, Facebook and Telegram “immediately” in anticipation of the terrorist group Hamas broadcasting videos of it executing hostages taken in its attack on Israel.
A Hamas spokesperson has warned that the organization will post videos of it killing civilians it has captured if Israel targets people in Gaza without warning. In addition to at least 150 kidnapped Israelis, at least 20 missing Americans may be among the hostages taken by Hamas, the White House said Tuesday. The U.S. government has designated Hamas as a terrorist organization and several of its members as terrorists.
“Dear Parents: We have been notified that soon there will be videos sent of the hostages begging for their lives,” one school wrote in a memo to families reviewed by Forbes. “Please delete TikTok from your children’s phones. We cannot allow our children to see them.”
“It’s hard for us, it’s impossible for us, to digest all of the content displayed on various social networks,” said the note, which Forbes translated from Hebrew. A separate letter in Hebrew called on parents to delete Facebook and Telegram from their kids’ phones, in addition to TikTok.
A private school in New Jersey circulated a similar letter this week to its nearly 1,000 students and parents.
“Local psychologists have reached out to us and informed us that the Israeli government is urging parents to tell their children to delete Instagram and TikTok immediately,” the principal wrote in an email seen by Forbes. “We strongly advise our students to do the same as soon as possible. … As one Israeli psychologist noted, ‘The videos and testimonies we are currently exposed to are bigger and crueler than our souls can contain.’”
Forbes has omitted the names of the various schools for security reasons. On social media, scores of parents from Arizona, New York, Canada, the U.K. and beyond claimed they, too, had received this deletion guidance from schools.
Since Hamas waged its surprise attack on Israel on Saturday, gruesome videos of the violence have quickly gone viral on the world’s most widely used social media platforms, testing the companies’ policies and processes aimed at preventing or removing harmful content. Graphic reels of bloodied women who had been raped, kidnapped or killed, their corpses paraded around Gaza as soldiers sat on them and onlookers spat on them, have been easy to find across the major platforms.
News of schools blasting out letters urging families to delete popular apps prompted Senator Rick Scott to call on the platforms to remove troubling posts and accounts that “instill fear and create chaos” and prevent Hamas from monetizing them in any way. “We’ve seen reports of babies savagely beheaded. Children shot in front of their parents. The elderly dragged through the streets,” Scott said. “Now Iran-backed Hamas wants to inflict more terror by sharing videos of hostages begging for their lives in Gaza. Social media companies MUST take action. TikTok, Instagram (Meta), X and every other social media platform have an obligation to stop these terrorists from distributing posts containing violence and murder and collecting financial support for their terror operations.”
TikTok and Meta (which owns Facebook and Instagram) did not immediately respond to requests for comment about the schools’ concerns and how the companies are approaching the possibility that Hamas terrorists may try to weaponize mainstream platforms to spread these violent videos.
TikTok’s policies state that it does not allow “violent and hateful organizations or individuals” on the platform, a category that includes violent extremists, criminal or hate organizations and individual perpetrators of mass violence. Meta also prohibits organizations and individuals “organizing or advocating for violence against civilians… or engaging in systematic criminal operations.” That includes groups designated as terrorist organizations by the U.S. government, such as Hamas, as well as individuals designated as terrorists. “We remove praise, substantive support, and representation of [these] entities as well as their leaders, founders, or prominent members,” Meta’s rules state.
Viral videos showing the violence aren’t the only thing politicians are concerned about on social media. Misinformation about the war, including deceptive videos about purported destruction, victims and rescues, has also been rampant on the sites. It spread as many users on the ground were turning to social media, particularly X (formerly Twitter), for real-time updates on what was unfolding outside their doors.
For example, European commissioner Thierry Breton demanded action from X on Tuesday to address “illegal content and disinformation” being spread in the EU—warning that the company’s moderation (or lack thereof) of this material could violate the bloc’s Digital Services Act. In a letter to owner Elon Musk, Breton called on X to be clearer about what’s allowed on the site when it comes to terrorist or violent content and faster to take it down. The platform has been plagued by “fake and manipulated images and facts,” including “repurposed old images of unrelated armed conflicts or military footage that actually originated from video games,” Breton wrote, creating a “risk to public security and civic discourse.” Musk replied that “our policy is that everything is open source and transparent, an approach that I know the EU supports.”
At X, the conflict is arguably the company’s highest-stakes challenge since Musk took over Twitter a year ago. Its safety team said this week that it has seen a surge of active users in the conflict area and that X leaders were deploying “the highest level of response” to protect discourse as the crisis intensifies. That includes removing newly created Hamas-affiliated accounts, monitoring for antisemitic speech and working with the Global Internet Forum to Counter Terrorism, an industry-wide anti-terrorism body, to track problematic trends. Still, the platform’s tolerance for graphic material appears to remain high.
“We know that it’s sometimes incredibly difficult to see certain content, especially in moments like the one unfolding,” the safety team’s statement said. “In these situations, X believes that, while difficult, it’s in the public’s interest to understand what’s happening in real time.”
Rina Torchinsky contributed reporting.