Regulators and policy-makers disagree on how to tackle misinformation, but there has never been a broader consensus that something must be done. The US elections and the Covid-19 pandemic have given this issue more urgency and pushed online platforms to self-regulate in a way never seen before, particularly around political advertising.

Regulatory Trends

Listed below are the key regulatory trends impacting the misinformation theme, as identified by GlobalData.

Initiatives against misinformation

Since 2018, several initiatives from governments worldwide have attempted to stem the flow of online misinformation. Most of these initiatives are either proposals still under debate or laws that apply only to a specific period, such as the run-up to an election.

In 2018, France passed a law to fight misinformation during elections. The legislation gives authorities the power to remove fake content spread via social media, block the sites that publish it, and demand more financial transparency around sponsored content in the three months before elections.

In the UK, the Online Harms White Paper, published by the Department for Digital, Culture, Media and Sport (DCMS) in 2019, aims “to create a system of accountability and oversight for tech companies beyond self-regulation.” Under the proposal, companies are responsible for published content, including disinformation and extremist content.

In 2020, the UK government also designated the communication watchdog Ofcom to regulate the internet. Ofcom will require internet companies such as Facebook and Google to publish explicit statements setting out the content and behaviour they deem acceptable on their sites.

In 2019, the Australian parliament passed some of the world’s toughest laws to penalise social media platforms for violent content. Drafted in the wake of the Christchurch terrorist attack, which was live-streamed on social media, the law states that executives of platforms that do not remove “abhorrent, violent material” could face up to three years in prison. Companies could also be fined A$10.5m ($7.7m) or 10% of the site’s annual turnover, whichever is larger.

This is an edited extract from the Misinformation – Thematic Research report produced by GlobalData Thematic Research.