Online misinformation, in its most effective forms, does more than push a single false claim. Malicious actors seeking to influence public opinion on a specific topic do not necessarily try to convince users of a particular view, as this is difficult. Instead, they spread a large volume of conflicting messages to persuade citizens either that the issue is too complicated to understand fully or that there is no right answer.

Technology Trends

Listed below are the key technology trends impacting the misinformation theme, as identified by GlobalData.

Microtargeting

In the era of instant gratification, users decide quickly what they want, which means that companies must deliver content that is as personalised as possible. This affects not only marketing but also political advertising. According to a US Senate Select Committee on Intelligence report, Russia’s Internet Research Agency used digital personalisation techniques to interfere in the 2016 US presidential elections.

This campaign specifically targeted African-Americans, with misinformation used to generate outrage against other social groups, co-opt participation in protests, or even convince individuals not to participate in the elections at all.

Bad bots

Bots are autonomous programmes on a network that can interact with systems or users. On social media, bots can impersonate real people, for example, by automatically writing messages. Multiple bots acting together can create a buzz around a person, product, or topic and push a particular point of view or agenda. Bots can amplify the reach of disinformation by pushing specific messages, hashtags, or accounts, creating the impression that a particular perspective is popular and, therefore, more likely to be true.
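
To make the amplification mechanism concrete, the sketch below shows one simple heuristic of the kind researchers use to flag coordinated accounts: an account that pushes the same hashtag dozens of times within a short window is posting at a pace few humans sustain. The data layout, thresholds, and function names are illustrative assumptions, not any platform's actual detection logic.

```python
from collections import defaultdict
from datetime import timedelta

def flag_likely_amplifiers(posts, hashtag, window=timedelta(hours=1), threshold=20):
    """posts: iterable of (account_id, timestamp, hashtags) tuples.

    Returns the set of accounts that used `hashtag` at least `threshold`
    times within any single `window` — a crude, illustrative bot signal.
    """
    times_by_account = defaultdict(list)
    for account, ts, tags in posts:
        if hashtag in tags:
            times_by_account[account].append(ts)

    flagged = set()
    for account, times in times_by_account.items():
        times.sort()
        # Slide a window over the account's posts: `threshold` uses of the
        # hashtag inside `window` suggests automated amplification.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                flagged.add(account)
                break
    return flagged
```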

Researchers at Brown University reported that a quarter of tweets on climate change were likely posted by bots to spread climate-denial propaganda. Similarly, many myths about Covid-19 are pushed by bots on social media feeds.

Content-shaping algorithms

Facebook’s News Feed, Twitter’s Timeline, and YouTube’s recommendation engine are all examples of algorithms that determine the content that individual users see online. Algorithms also determine what ads users should be shown and when. These algorithms have learned to prioritise content with greater prior engagement. For example, Twitter’s algorithm has taught itself that users are more likely to engage if they see content that has already received a lot of retweets and mentions, compared with content that has less engagement.
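
As a rough illustration of how engagement-driven ranking works, the sketch below scores posts by their prior engagement and sorts a feed accordingly. The weights and field names are assumptions for illustration only; real feed rankers are large learned models, not fixed formulas.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    retweets: int
    mentions: int
    likes: int

def engagement_score(post: Post) -> float:
    # Posts that already have traction score higher, so they are shown more,
    # gather more engagement, and score higher still: the feedback loop that
    # favours already-popular content. Weights here are invented.
    return 2.0 * post.retweets + 1.5 * post.mentions + 1.0 * post.likes

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-engagement content surfaces first in the timeline.
    return sorted(posts, key=engagement_score, reverse=True)
```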

Inflammatory and shocking content is the most likely to generate a quick reaction, as it taps into users' existing opinions. In the worst case, this turns social media into a confirmation bias machine.

Human content moderation

Social media companies stand accused of letting misinformation spread without regard for the consequences. In response, they have made significant investments in content moderation. Facebook has created an Oversight Board, an independent, international panel of experts on free expression that has the final say on contested content moderation decisions.

Critics counter that such initiatives are relatively insignificant compared with Facebook's power to amplify content, and some dismiss the board as a form of greenwashing. It is also doubtful that the board will have any say over possible changes to Facebook's algorithms.

Content moderation algorithms

Evidence of Russian interference in the 2016 US presidential elections led Big Tech companies to launch new initiatives around fact-checking and content restrictions, including adjusting content moderation algorithms to identify harmful content more effectively. These algorithms detect content that breaks a company's rules and remove it from the platform without any human involvement.
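
A minimal sketch of such a fully automated pipeline appears below, assuming a hypothetical classify function that returns the probability a post violates policy. The thresholds and labels are illustrative, not any platform's real configuration: only high-confidence violations are removed automatically, while borderline cases are routed to human reviewers.

```python
def moderate(post_text: str, classify, remove_at=0.95, review_at=0.60) -> str:
    """classify: hypothetical model returning P(post violates policy)."""
    score = classify(post_text)
    if score >= remove_at:
        return "removed"       # automated takedown, no human involved
    if score >= review_at:
        return "human_review"  # the grey area where algorithms fall short
    return "allowed"
```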

While these systems are well suited to removing images of specific symbols, such as a swastika, they cannot replace human judgment when it comes to violent, hateful, or misleading content with some public interest value. The algorithms also create new problems of their own. As reported in misinformation research published by the New America think tank in 2019, algorithms trained to identify hate speech are more likely to flag social media content created by African Americans, including posts that discuss personal experiences of racism in the US.

This is an edited extract from the Misinformation – Thematic Research report produced by GlobalData Thematic Research.