Internet content moderation and fact checking: how and why?

At the start of the year, we heard news that was surprising, though perhaps not unexpected: Meta was ending its fact-checking program. The company said the program had not been very effective and had restricted people's freedom of speech too much. This sparked a lot of discussion, so we took a closer look at the topic during Internet Day.
The talk was led by Henrik Roonemaa, with speakers Mari-Liis Somelar (fact-check reporter at Delfi), Andreas Kaju (managing partner at Meta Advisory), and Christian Veske (Commissioner for Equality and Equal Treatment).

When we talk about fact-checking and moderating content, we need to remember that rules are based on the culture and values of each region. That’s why it’s hard to compare how this is done in the U.S. versus Europe. Even in Europe, different countries handle it differently.

Experts agree that Europe needs to update how fact-checking is done. We've been using the same methods for many years, but it's clear that new ideas and formats are needed to make fact-checking more useful and effective.

What happens online can affect real life. When false information becomes normalized, it is a serious problem. That is why we need to limit disinformation to protect society and our cultural values.

In Estonia, a lot of disinformation comes from our eastern neighbor, and it often attacks European values. Since there is so much content on the internet, we can't control it all. Instead, we should focus on teaching young people and the public how to spot and handle false information.

On social media, algorithms and automated systems already limit what we see. The end of Meta’s fact-checking program is just one part of a bigger picture. Fighting misinformation is still very important in a free society. Right now, AI can’t fully take over this job—we still need humans to understand context, check facts, and decide what’s true or false. People can think critically in ways AI can't yet.

False information is often linked to attempts to influence elections. One of the best-known examples was the 2016 U.S. presidential election. But to this day, there is no clear proof that it changed the final result, likely because the U.S. has such a large number of voters.

In a democracy, we have to accept that we won’t always like the results, and not every surprising outcome is caused by outside influence.

In Estonia, one of the biggest debates has been about anonymous online comments. News websites now track IP addresses to stop bad behavior, but many of these comments have moved to Facebook. People now post mean or inappropriate comments using their real names and photos.

This can harm vulnerable groups. A study by the Estonian Human Rights Center found that young women especially feel targeted. They feel unsafe online, are less likely to share opinions, and take part in fewer discussions. That creates an unbalanced public debate, where some voices are missing, and it may give a false picture of what people actually think.

If we stopped all fact-checking and moderation, what would happen? A good example is Telegram, a platform where harmful and illegal content spreads freely. We even saw a smaller version of this earlier in the year when Meta eased up on moderation, and people suddenly saw more violent posts on Instagram.

Even though social media companies want to keep users on their platforms, they know that some rules are needed. Algorithms still show people things they’re interested in, but they also need to keep the space safe and welcoming.

In a free society, we can’t ban content just because we don’t like it. While we can disagree with bad behavior, punishing it harshly is not always the answer. Instead, the focus should be on education—especially for young people. They need the skills to think critically and make smart choices about the content they see.

Too many bans can backfire. They might even divide people more. The better path is to teach, guide, and encourage respect, so that everyone can safely take part in conversations—both online and offline.
