Facebook has started testing a feature that asks users in the U.S. whether they are worried that somebody they know is becoming an “extremist.” The pop-up notification redirects users to a support page that describes the test as part of the company’s larger effort to provide resources and support to people who have been exposed to extremist content.
Facebook says the project is being developed with non-governmental organizations and academic experts. But this is not a way to help people “exposed to extremists.” It’s a way to target people with political perspectives Facebook doesn’t like and then censor them. The company has been doing it all year.
RedState’s Kira Davis was one of the first social media users to post about Facebook’s new notifications, asking whether anyone else had seen the “extremist” message pop up on Facebook. She said a friend of hers got the message twice and found it very disturbing.
The message read: “Are you concerned that someone you know is becoming an extremist? We care about preventing extremism on Facebook. Others in your situation have received confidential support.” A “get support” button lets users hear stories and get advice from people who have escaped violent extremist groups.
Other users responded with screenshots of a similar message. “Yes…actually, I have a real concern that some leftist technocrats are creating an Orwellian environment where people are being arbitrarily silenced or banned for saying something the ‘thought police’ don’t like,” Virginia Delegate Nick Freitas tweeted.
Writer Alex Berenson pointed to the “confidential help” page and asked whether Facebook is a publisher and political platform or an open social media site. If it wants to act as a political platform, it should be legally liable for every bit of content it hosts.
Other users received messages warning that they “may have been exposed to harmful extremist content recently.” The message states that violent groups try to manipulate your anger and disappointment, and adds that you can “take action now” to protect yourself and others.
Under the “harmful extremist content” warning, Facebook gives examples of arguments made by violent groups, pairing each one with Facebook’s attempt to “debunk” it. The examples include lines such as “violence is the only way to achieve change” and “minorities are destroying the country.” The same support link is provided, taking users to a page for the organization Life After Hate.
Life After Hate describes itself as a nonprofit committed to helping people leave the “violent far-right,” connect with humanity, and lead “compassionate lives.” The group also claims that “far-right extremism and white supremacy are the greatest domestic terror threats facing the United States” today. Mind you, Life After Hate received a $400,000 federal grant from the Obama administration, only to have it rescinded under the Trump administration. Former quarterback Colin Kaepernick even donated $50,000 to the organization in 2017. Talk about political bias.
Facebook should be asked why it is partnering with a group that focuses on far-right violence, and how it determines what kind of content is “harmful” and “extremist.” When asked about surveilling and combating left-wing extremism, the Big Tech company wouldn’t comment. Facebook wouldn’t even say whether Black Lives Matter falls under “extremism” or whether it would censor BLM-related content. According to data collected by the Armed Conflict Location and Event Data Project (ACLED), BLM-related events accounted for 95% of the riots in the U.S. in 2020.
Facebook said it would continue to partner with organizations committed to preventing terrorists and violent extremists from exploiting its platforms. A Facebook spokesperson said the company would continue to work with expert academic and NGO partners to share this knowledge through GIFCT.
GIFCT, the Global Internet Forum to Counter Terrorism, develops technology to identify and address violent extremist content online. But people are pointing out that, at this point, Facebook identifies “extremists” as conservatives. One of the prime examples is Facebook’s continued ban on former President Donald Trump, with Facebook’s vice president of global affairs saying that he is a “risk to public safety.”
The new social justice movement treats disagreement as violence; it makes its adherents feel “unsafe” that someone might hold a different opinion than their own. This started with Trump’s Twitter ban and has only gotten worse by the day.
The radical left and Big Tech are drunk on their power to control speech, and sooner or later people will leave Facebook for a platform that protects their First Amendment rights. Facebook is not a free-speech site; it’s a political environment for the woke.