Combating “Skepticism”: Federal Grant Funds New Effort to Combat “Misinformation”
We have been discussing a comprehensive effort by the Biden Administration to blacklist or censor citizens accused of “disinformation” or “misinformation.” This effort includes dozens of FBI agents and other agency employees who worked with social media companies to bar or suspend accounts. It also included grants to academic and third-party organizations to create blacklists or to pressure advertisers to withdraw support from conservative sites. Now another such grant, through the National Science Foundation, has been identified: it gave millions to professors to develop a misinformation fact-checking tool called “Course Correct.” The tool is meant to help fight “skepticism” and reinforce “trust” in what the government and the programmers define as true or reliable viewpoints.
The National Science Foundation reportedly awarded grants in 2021 and 2022 totaling more than $5.7 million for the development of Course Correct, which would allow media and government officials to target misinformation on topics such as U.S. elections and COVID-19 vaccine hesitancy. In addition, an NSF grant funded under the Coronavirus Aid, Relief, and Economic Security (CARES) Act supported the application of Course Correct to mental health issues.
The system would use machine learning and other means to identify social media posts pertaining to electoral skepticism and vaccine hesitancy, including flagging at-risk online communities for intervention. Sound familiar?
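To make that description concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn) of the general kind of text classifier such a flagging system could employ. The training posts, labels, and threshold below are invented for illustration and do not reflect Course Correct’s actual design, which the researchers have not disclosed.

```python
# Hypothetical sketch only: a toy classifier of the general kind the grant
# abstract describes. All posts, labels, and thresholds are invented for
# illustration and do not reflect Course Correct's actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: posts hand-labeled as flagged (1) or not (0).
posts = [
    "the election results cannot be trusted",
    "vaccines have not been tested properly",
    "polls close at 8 pm in most counties",
    "my booster appointment took ten minutes",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Posts scoring above an arbitrary threshold would be flagged for review.
new_posts = ["I doubt the machines counted my vote"]
for post, score in zip(new_posts, model.predict_proba(new_posts)[:, 1]):
    if score > 0.5:
        print(f"flagged ({score:.2f}): {post}")
```

Even in this toy form, the central issue is visible: what gets flagged depends entirely on who labeled the training examples and how.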
This closely parallels the efforts funded by other grants through offices like the State Department’s Global Engagement Center and the National Endowment for Democracy.
Democrats have opposed efforts to investigate the full scope of censorship and blacklisting efforts by the federal government. However, it appears that there is a wide array of such grants targeting free speech under the guise of combating what researchers view as “disinformation” or “misinformation.” Those terms are usually ill-defined and have repeatedly been found to shield bias on the part of the researchers.
In the case of the British-based Global Disinformation Index (GDI), the result was the targeting of ten conservative and libertarian sites as the most dangerous sources of disinformation. GDI then sought to persuade advertisers to withdraw support from those sites, while listing their most liberal counterparts as among the most trustworthy.
The latest grant is being conducted by Michael Wagner and Sijia Yang of the University of Wisconsin-Madison’s School of Journalism and Mass Communication, Porismita Borah of Washington State University’s Edward R. Murrow College of Communication, Srijan Kumar of Georgia Tech’s College of Computing, and Munmun De Choudhury of Georgia Tech’s School of Interactive Computing.
The grant abstract echoes the earlier work in warning that social media serves “as a major source of delegitimizing information about elections and vaccines, with networks of users actively sowing doubts about election integrity and vaccine efficacy, fueling the spread of misinformation.”
Of course, many of the scientists and groups who were previously suspended for disinformation in these areas were ultimately vindicated. Mask mandates and other pandemic measures, like the closing of schools, are now cited as fueling emotional and developmental problems in children. The closing of schools and businesses was challenged by some critics as unnecessary; many of those critics were also censored, and it now appears that they may have been right. Many countries that did not close schools did not experience increases in Covid infections. Meanwhile, we are now facing alarming drops in test scores and rises in medical illness among the young.
The point is only that there were countervailing indicators on mask efficacy and a basis to question the mandates. Yet there was no real debate, because of the censorship on social media supported by many Democratic leaders. To question such mandates was declared a public health threat. The head of the World Health Organization even supported censorship to combat what he called an “infodemic.”
A lawsuit was filed by Missouri and Louisiana and joined by leading experts, including Drs. Jayanta Bhattacharya (Stanford University) and Martin Kulldorff (Harvard University), co-authors of the Great Barrington Declaration, which advocated a more focused Covid response targeting the most vulnerable population rather than widespread lockdowns and mandates. (Bhattacharya previously objected to the suspension of Dr. Clare Craig after she raised concerns about Pfizer trial documents.) Many are now questioning the efficacy and cost of the massive lockdowns, as well as the real value of masks and the rejection of natural immunity as an alternative to vaccination. Yet these experts and others were attacked for such views just a year ago, and some found themselves censored on social media for challenging the claims of Dr. Fauci and others.
The media has quietly acknowledged the science questioning mask efficacy and school closures without addressing its own role in attacking those who raised these objections. Even raising the lab-leak theory of the origin of Covid-19 (a theory now treated as plausible) was denounced as a conspiracy theory. The science and health reporter for the New York Times, Apoorva Mandavilli, even denounced the theory as “racist.” In the meantime, California has moved to potentially strip doctors of their licenses for spreading dissenting views on Covid.
Censorship is now embraced even when the underlying information is true. In another recently disclosed disinformation project at Stanford University, experts insisted that even true stories could still be dangerous forms of disinformation if they contributed to “hesitancy” on vaccines or other issues.
As with those prior grants, it is not clear how Course Correct specifically defines “verifiably accurate information.” When pressed by the conservative site The College Fix, the researchers reportedly failed to supply an answer. What constitutes “misinformation” depends on the views of the programmers. Yet these systems are sold as somehow transcending bias and using science to protect us from our own bad ideas or biases.
Recently, we discussed Bill Gates’s call to use artificial intelligence (AI) to protect us from harmful thoughts or ideas. In an interview on the German program “Handelsblatt Disrupt,” Gates called for unleashing AI to stop certain views from being “magnified by digital channels.” The problem, he said, is that we allow “various conspiracy theories like QAnon or whatever to be blasted out by people who wanted to believe those things.”
Gates added that AI can combat “political polarization” by checking “confirmation bias.”
Confirmation bias is a term long used to describe the tendency of people to search for or interpret information in a way that confirms their own beliefs. It is now being used to dismiss those with opposing views as ignorant slobs dragging their knuckles across the internet — people endangering us all by failing to accept the logic behind policies on COVID, climate change or a host of other political issues.
This is not the first call for AI overlords to protect us from ourselves. Last September, Gates gave the keynote address at the Forbes 400 Summit on Philanthropy. He told his fellow billionaires that “polarization and lack of trust is a problem.”
The problem is again … well … people: “People seek simple solutions [and] the truth is kind of boring sometimes.”
Not AI, of course. That would supply the solutions. Otherwise, Gates suggested, we could all die: “Political polarization may bring it all to an end, we’re going to have a hung election and a civil war.”
Others have suggested a Brave New World in which citizens will be carefully guided in what they read and see. Democratic leaders have called for enlightened algorithms to frame what citizens access on the internet. In 2021, Sen. Elizabeth Warren (D-Mass.) objected that people were not listening to the informed views of herself and leading experts. Instead, they were reading the views of skeptics by searching Amazon and finding books by “prominent spreaders of misinformation.”
Warren blamed Amazon for failing to limit searches or choices: “This pattern and practice of misbehavior suggests that Amazon is either unwilling or unable to modify its business practices to prevent the spread of falsehoods or the sale of inappropriate products.” In her letter, Warren gave the company 14 days to change its algorithms to throttle and obstruct efforts to read opposing views.
The priority for the House should be to establish the full range of grants made by the Administration for the development of blacklisting or censorship tools. That should be in addition to efforts to gauge the direct work of federal employees in censorship efforts at companies like Twitter. We can debate the wisdom or risks of such work, but we should first have transparency on the full scope of censorship efforts by the federal government, including its use of academic and third-party organizations.