
Misinformation – Regulatory Trends

By GlobalData Thematic Research 15 Apr 2021 (Last Updated May 6th, 2021 11:27)



Regulators and policy-makers disagree on how to tackle misinformation, but there has never been a broader consensus that something must be done. The US elections and the Covid-19 pandemic have given this issue more urgency and pushed online platforms to self-regulate in a way never seen before, particularly around political advertising.

Regulatory Trends

Listed below are the key regulatory trends impacting the misinformation theme, as identified by GlobalData.

Initiatives against misinformation

Since 2018, governments worldwide have launched several initiatives to stem the flow of online misinformation. Most are either proposals or laws that apply only to a specific period, such as the run-up to an election.

In 2018, France passed a law to fight misinformation during elections. The legislation gives authorities the power to remove fake content spread via social media, block the sites that publish it, and demand more financial transparency around sponsored content in the three months before elections.

In the UK, the Online Harms white paper published by the Department for Digital, Culture, Media and Sport (DCMS) in 2019 aims “to create a system of accountability and oversight for tech companies beyond self-regulation.” Under the proposal, companies are held responsible for content published on their platforms, including disinformation and extremist material.

In 2020, the UK government also designated the communication watchdog Ofcom to regulate the internet. Ofcom will require internet companies such as Facebook and Google to publish explicit statements setting out the content and behaviour they deem acceptable on their sites.

In 2019, the Australian parliament passed some of the world’s toughest laws to penalise social media platforms for violent content. Drafted in the wake of the Christchurch terrorist attack, which was live-streamed on social media, the law states that executives of platforms that do not remove “abhorrent, violent material” could face up to three years in prison. Companies could also be fined A$10.5m ($7.7m) or 10% of the site’s annual turnover, whichever is larger.

This is an edited extract from the Misinformation – Thematic Research report produced by GlobalData Thematic Research.
