6.5 million people could be wrong, like Marjorie Taylor Greene
Boebert, Greene Join Fight for Fake News
In this space each week, triplecheck will share information on the climate misinformation we’re seeing online – what’s going viral, how many people are seeing it, and who’s responsible for moving it. triplecheck was initiated by the Climate Action Campaign and was created to develop innovative tools to help fight misinformation.
This week (5/17 to 5/24), 6.5 million people were exposed to misinformation about climate change. Right-wing pundits and members of Congress continued to promote the usual claims that climate change is a hoax (debunked here) and that the Green New Deal is the first step towards America abandoning democracy for socialism (this is ridiculous and has been debunked here and here). There was also a significant amount of content attacking the Biden Administration for its decision to waive sanctions for Russia’s Nord Stream 2 pipeline into Germany while refusing to allow the Keystone XL pipeline to be built in the US. Regardless of whether you agree with the Biden Administration’s decision, it had nothing to do with the Keystone XL pipeline.
In addition, Rep. Lauren Boebert, Rep. Marjorie Taylor Greene, and right-wing pundit Jordan Schachtel all posted content linking support for policies to combat climate change with the requirement that people wear masks to prevent the spread of COVID-19, in an effort to undermine support for both. That approach isn’t new – right-wing extremists have been promoting that argument since the beginning of the pandemic – but it’s an extremely powerful one. A person’s position on mask wearing, like their position on climate change, is considered a clear signal of social identity. In times of conflict, like a global pandemic, our desire to connect to a group (social scientists call it ingrouping) is even stronger. It makes us look for and share information that reinforces our status within the group – and the lower status of those outside of it (the outgroup) – even if the information isn’t true.
Once again, the question is: what can we do about it?
One possible route is to find ways to build in time for people to think about what they’re sharing before they share it. A 2017 study found that people were more likely to trust people from an outgroup if they had time to think about their decision. Putting a warning badge on content is one way to encourage people to take additional time to think about the content they’re sharing.
Another way is to use logic-based arguments – arguments that expose the flaws in someone’s reasoning, forcing them to think about the position they’re taking, instead of just telling them the information they’re sharing is wrong. Faculty at the University of Iowa have identified five main logical fallacies that are commonly found in online misinformation:
ad hominem attacks (here’s an example): bringing in negative information about an argument’s proponents to undermine their position, even if the negative information is unrelated to the actual issue;
assumptions about causation (here’s an example): assuming that because A follows B, B must have caused A;
false dichotomy (here’s an example): setting up an argument so there are only two choices (even when there are many different options) and then eliminating one of the choices;
straw man (here’s an example): setting up a weak version of an opponent’s argument and then knocking it down;
slippery slope (here’s an example): claiming, without evidence, that a decision or position will set off a chain reaction ending in dire consequences.
Identifying the logical flaws in the misinformation we see online gets us one step closer to effectively rebutting it. We’ll have more on that next week!