Social media: How might it be regulated?

Several countries around the world are considering regulating social media - but what might that look like?

A new report has put forward a tranche of ideas that its authors say could end the "informational chaos that poses a vital threat to democracies".

One of its suggestions is that social networks should be required to release details of their algorithms and core functions to trusted researchers, in order for the technology to be vetted.

It also suggests adding "friction" to online sharing, to prevent the rampant spread of disinformation.

The report was published by the Forum for Information and Democracy, which was established to make non-binding recommendations to 38 countries. They include Australia, Canada, France, Germany, India, South Korea and the UK.

Among those contributing to the report were Cambridge Analytica whistleblower Christopher Wylie, and former Facebook investor Roger McNamee - a long-time critic of the social network.

Free expression group Article 19 and digital rights groups including the Electronic Frontier Foundation were also consulted.

What does the report suggest?

One of the core recommendations is the creation of a "statutory building code", which would set out mandatory safety and quality requirements for digital platforms.

"If I were to produce a kitchen appliance, I have to do more safety testing and go through more compliance procedures to create a toaster than to create Facebook," Mr Wylie told the BBC.

He said social networks should be required to weigh up all the potential harms that could be caused by their design and engineering decisions.

Image caption: Christopher Wylie revealed how Cambridge Analytica used millions of people's Facebook data for targeted campaigns

The report also suggests social networks should display a correction to every single person who was exposed to misinformation, if independent fact-checkers identify a story as false.

Other suggestions include:

  • implementing "circuit breakers" so that newly viral content is temporarily stopped from spreading while it is fact-checked (a rough sketch of this idea follows the lists below)
  • forcing social networks to disclose in the news feed why content has been recommended to a user
  • limiting the use of micro-targeting advertising messages
  • making it illegal to exclude people from content on the basis of race or religion, such as hiding a spare room advert from people of colour
  • banning the use of so-called dark patterns - user interfaces designed to confuse or frustrate the user, such as making it hard to delete your account

It also included some measures that Facebook, Twitter and YouTube already take voluntarily, such as:

  • labelling the accounts of state-controlled news organisations
  • limiting how many times messages can be forwarded to large groups, as Facebook does on WhatsApp
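
To make the "circuit breaker" suggestion more concrete, here is a minimal illustrative sketch in Python. It is not drawn from the report itself: the share threshold, the time window and the class and method names are assumptions chosen only to show the idea of pausing the amplification of fast-spreading content until a fact-check clears it.

```python
from datetime import datetime, timedelta

# Illustrative thresholds -- these numbers are assumptions, not figures from the report.
SHARE_THRESHOLD = 10_000          # shares within the window that count as "newly viral"
WINDOW = timedelta(hours=1)       # how quickly those shares must accumulate
REVIEW_HOLD = timedelta(hours=6)  # how long amplification is paused for fact-checking


class Post:
    def __init__(self, post_id):
        self.post_id = post_id
        self.share_times = []      # timestamps of shares
        self.paused_until = None   # set when the circuit breaker trips
        self.fact_checked = False  # set by an external, human fact-checking process

    def record_share(self, now=None):
        """Record a share and trip the breaker if the post is spreading too fast."""
        now = now or datetime.utcnow()
        self.share_times.append(now)
        recent = [t for t in self.share_times if now - t <= WINDOW]
        if len(recent) >= SHARE_THRESHOLD and not self.fact_checked:
            self.paused_until = now + REVIEW_HOLD

    def can_be_amplified(self, now=None):
        """Return True if the recommendation system may keep promoting this post."""
        now = now or datetime.utcnow()
        if self.fact_checked:
            return True
        return self.paused_until is None or now >= self.paused_until
```

In this toy version the pause only affects algorithmic recommendation; the post itself is not removed, which reflects the report's distinction between what people may say and how widely a platform amplifies it.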

The three businesses were sent a copy of the report on Wednesday and the BBC invited them to comment.

Twitter's head of public policy strategy, Nick Pickles, said: "Twitter is committed to building a safer internet and improving the health of the public conversation. We support a forward-looking approach to regulation that protects the Open Internet, freedom of expression and fair competition in the internet sector.

"Openness and transparency is central to Twitter's approach, as embodied by our public API, our information operations archive, our commitment to user choice, our decision to ban political advertising and label content to provide more context and information, and our disclosures in the Twitter Transparency Report.

"However, technology companies are not all the same, and nor is technology the only part of the media ecosystem. It is essential to ensure a whole of society response to tackle these important issues."

In an interview with BBC News, Mr Wylie said the report's recommendations had been designed to protect individuals' free expression.

The following has been edited for brevity and clarity.

Whenever social media regulation is proposed, there are concerns about stifling free speech. Don't your proposals pose such a risk?

In most Western democracies, you do have the freedom of speech. But freedom of speech is not an entitlement to reach. You are free to say what you want, within the confines of hate speech law, libel law and so on. But you are not entitled to have your voice artificially amplified by technology.

These platforms are not neutral environments. Algorithms make decisions about what people see or do not see. Nothing in this report restricts your ability to say what you want. What we're talking about is the platform's function of artificially amplifying false and manipulative information on a wide scale.

Who defines what counts as misinformation?

I guess this gets down to something fairly fundamental: do you believe in truth? There are some objectively disprovable things spreading quite rapidly on Facebook right now. For example, that Covid does not exist and that the vaccine is actually intended to control people's minds. These are all things that are manifestly untrue, and you can prove that.

Our democratic institutions and public discourse are underpinned by an assumption that we can at least agree on things that are true. Our debates may be about how we respond or what values we apply to a particular problem, but we at least have a common understanding that there are certain things that are manifestly true.

Would regulation stifle the free flow of ideas and people's right to believe whatever they wanted?

If we took the premise that people should have a lawful right to be manipulated and deceived, we wouldn't have rules on fraud or undue influence. There are very tangible harms that come from manipulating people. In the United States, the public health response to Covid-19 has been inhibited by widespread disinformation about the existence of the virus or false claims about different kinds of treatment that do not work.

Do you have a right to believe what you want? Yes, of course. No-one that I know of is proposing any kind of mind or mental regulation.

But we have to focus on the responsibility of a platform. Facebook, Twitter and YouTube create algorithms that promote and highlight information. That is an active engineering decision.

When the result is an inhibited public health response to a pandemic or the undermining of confidence in our democratic institutions, because people are being manipulated with objectively false information, there has to be some kind of accountability for platforms.

But Facebook says it does work hard to tackle misinformation and doesn't profit from hate speech.

An oil company would say: "We do not profit from pollution." Pollution is a by-product - and a harmful by-product. Regardless of whether Facebook profits from hate or not, it is a harmful by-product of the current design and there are social harms that come from this business model.

Before the US election, Facebook and Twitter laid out what they would do if a candidate declared victory early or disputed the result. We have seen both apply context labels to President Donald Trump's tweets. Do you think they were more prepared for the 2020 election?

It is clear that Facebook really hasn't done enough planning.

Look at the groups that are bubbling up every single day that are spreading disinformation about "cheating" in the US election and promoting all kinds of other conspiracy theories about the Biden campaign. This was a foreseeable outcome.

The way Facebook approaches these problems is: we'll wait and see and figure out a problem when it emerges. Every other industry has to meet minimum safety standards and consider the risks its products could pose to people, through risk mitigation and prevention.

If you regulated the big social networks, would it push more people on to fringe "free speech" social networks?

If you have a platform that has the unique selling point of "we will allow you to promote hate speech, we will allow you to deceive and manipulate people", I do not think that business model should be allowed in its current form. Platforms that monetise user engagement have a duty to their users to make at least a minimum effort to prevent clearly identified harms. I think it's ridiculous that there's more safety consideration for creating a toaster in someone's kitchen than for platforms that have had such a manifest impact on our public health response and democratic institutions.

What about other issues such as the way "perfect" images on Instagram can affect mental health and body image?

This is a product of a platform that is making recommendations to you. These algorithms work by picking up what you engage with and then they show you more and more of that.

In the report, we talk about a "cooling-off period". You could require algorithms to have a trigger that results in a cooling-off period for a certain type of content.

If the platform has just spent the past week showing you body-building ads, it could then hold off for the next two weeks. If you want to promote body-building, you can.

But from the user's perspective, they should not be constantly bombarded with a singular theme.
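
As a rough illustration of how a cooling-off period like this might work, here is a minimal Python sketch. The content categories, the one-week trigger, the two-week hold and the "dominance" threshold are assumptions based on the body-building example above, not a design taken from the report.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative parameters based on the body-building example in the interview --
# the exact numbers, names and threshold are assumptions, not a design from the report.
TRIGGER_WINDOW = timedelta(days=7)   # a theme dominating for about a week...
COOL_OFF = timedelta(days=14)        # ...triggers a two-week hold on that theme
DOMINANCE_SHARE = 0.5                # "dominating" = at least half of recent items


class CoolingOffFilter:
    """Tracks what one user has been shown and holds back over-represented themes."""

    def __init__(self):
        self.history = []                               # (timestamp, category) pairs
        self.cooling_until = defaultdict(lambda: None)  # category -> end of cool-off

    def record_shown(self, category, now=None):
        now = now or datetime.utcnow()
        self.history.append((now, category))
        # If this category now dominates the recent window, start its cool-off.
        recent = [c for t, c in self.history if now - t <= TRIGGER_WINDOW]
        if recent.count(category) / len(recent) >= DOMINANCE_SHARE:
            self.cooling_until[category] = now + COOL_OFF

    def may_recommend(self, category, now=None):
        """Return True if the feed may show this category to the user right now."""
        now = now or datetime.utcnow()
        until = self.cooling_until[category]
        return until is None or now >= until
```

A feed-ranking system could consult may_recommend before surfacing another item of the same theme, so that, in Mr Wylie's words, the user is not "constantly bombarded with a singular theme".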