Elon Musk is blasted over fake claims by Hamas being spread on X
Elon Musk is blasted over proliferation of fake claims being spread on X by Hamas-affiliated accounts after terror group’s violent rampage into Israel as EU chief gives him 24 hours to respond
- X owner Elon Musk is under pressure to answer questions as to why Hamas has been allowed to spread propaganda on his platform following the terror rampage
The rapid spread of misleading claims and doctored images in the aftermath of a deadly rampage by Hamas gunmen in Israel has put the focus on Elon Musk’s X platform, which has drawn the ire of the European Union and could face a fine of up to six percent of the social media giant’s revenue.
On Tuesday, European Union Commissioner Thierry Breton warned Musk that X was spreading ‘illegal content and disinformation,’ according to a letter Breton posted on X. The EU is home to some of the strictest internet laws in the world, which require platforms to fight fake content.
In his letter, Breton demanded that the company respond within 24 hours to allegations that misinformation of a ‘violent and terrorist’ nature was being spread. In his post on X, Breton tagged Musk’s handle.
Musk challenged Breton’s post, responding: ‘Please list the violations you allude to on X, so that the public can see them.’
Thierry Breton, the European commissioner for the internal market, posted a letter he sent to Musk online demanding that the company answer the allegations of misinformation
Breton described the content being spread on X as ‘illegal’
Changes made by Musk earlier this year have made it more difficult to track the full scale of deception on X, the site formerly known as Twitter, social media researchers told Reuters.
Since Musk acquired what was then Twitter in October 2022, the company has eliminated free-to-use data tools that allowed researchers to track fake news to its source.
Without those tools, researchers now need to manually analyze thousands of links, said Ruslan Trad, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab).
Asked for comment, an X representative said more than 500 unique Community Notes, a feature that lets users add context to potentially misleading content, have been posted about the Israeli-Palestinian conflict.
One false claim that spread on X and Meta Platforms’ Facebook showed a U.S. government document edited to look like approval for $8 billion in military funds to Israel, according to a report by the Reuters Fact Check team.
A Meta spokesperson said a team of experts, including Hebrew and Arabic speakers, was monitoring the ‘rapidly evolving situation in real-time.’
Other false claims include a mislabeled video purporting to show Hamas militants with a kidnapped child, and footage from a concert by American singer Bruno Mars miscaptioned as video from the Israeli music festival that was attacked by Hamas, according to Reuters Fact Check.
In a surprise attack on Saturday, Hamas gunmen rampaged through towns, taking captives and killing hundreds of people in the deadliest Palestinian militant attack in Israel’s history.
While disinformation has spread on all major social media platforms including Facebook and TikTok, X appeared to be the most recent to draw scrutiny from regulators.
Under Musk, X allows users to pay to verify their accounts and lets certain users earn a portion of ad sales under a revenue share program. The changes give paid accounts an incentive to spread provocative or false claims to rack up followers, said Renee DiResta, a research manager at Stanford Internet Observatory.
‘Some of these accounts (on X) appeared to have been set up recently to gain virality … and spread popular misinformation about the war,’ said Jack Brewster, enterprise editor at NewsGuard, which creates reliability ratings for news websites.
Musk himself recommended two accounts that had previously spread false claims as sources of ‘real-time’ updates on the conflict, the Washington Post reported. The billionaire owner of the platform posted the recommendation on Sunday and later deleted it.
Misinformation appeared to be most prevalent on X, according to Brewster and Tamara Kharroub, deputy executive director at Arab Center Washington DC, a nonpartisan research center.
False information has also spread on messaging app Telegram and short-form video app TikTok, said DFRLab’s Trad.
A Telegram spokesperson said the company does not have the ‘power to verify information.’ TikTok did not respond to a request for comment.
Social media platforms face the challenge of walking a line between moderating content to protect users and allowing information to spread in real time, something that has also helped the news media and investigators track civilian deaths.
Striking that balance is difficult even when platforms spend months preparing for scheduled events like elections, said Solomon Messing, a professor at New York University’s Center for Social Media and Politics who previously worked at Twitter and Facebook.
‘It’s much more difficult when there’s a surprise terrorist attack, particularly one with this much video footage,’ said Messing.
Some Community Notes on X have appeared only after misleading narratives had already been viewed by thousands of users, Kharroub said, making them less effective at correcting false information.
X said in a post on Monday that Community Notes typically appear within minutes of content being posted. The company said that while some content may be ‘incredibly difficult’ to see, it was in the public interest for people to see information in real time.
A YouTube spokesperson said some violent or graphic content may be allowed if it provides sufficient news or documentary value about the conflict, adding the company prohibits content that promotes violent organizations, including video filmed by Hamas. Like other online platforms, YouTube has moderation employees and technology to remove content that violates its rules.
Snap, owner of messaging app Snapchat, said its map feature, which lets users view public posts from anywhere in the world, will remain available in the region with teams monitoring for misinformation and content that incites violence.
(Reporting by Sheila Dang in Dallas and Riniki Sanyal in Bangalore, editing by Deepa Babington)