
Facebook whistleblower on ‘harmful but legal’ content | FT interview


You can enable subtitles (captions) in the video player

The Online Safety Bill in the UK, or in the case of the Digital Services Act in the EU, both right now they’re wrestling with the idea of content which is harmful but legal. So for example, Facebook’s algorithms… so like, when they’re doing risk assessments right now they’re limited to illegal content. So it’s like terrorism.

But they wouldn’t have to cover things like the fact that these algorithms pull people down rabbit holes. And in the case of kids they can follow very neutral interests like healthy eating and be drawn into anorexia content. That’s not illegal, but it’s really harmful, and kids die as a result of those things. And so I really hope that both the European Union and the UK can put forth strong, robust legislation, because if we limit it just to illegal content I think we’ll have really missed a huge opportunity.

Do you think that there is a real chance for change here? Do you see, based on all the people that you’ve met, a willingness to respond to what you’ve said, rather than a set agenda or a kind of politicised agenda? Do you see real change coming?

I think there is a huge opportunity here. Like, I’ve been amazed at how engaged people have been, how high quality the questions are. I think there’s a real hunger to make sure that there is a means to hold Facebook accountable, or other social media platforms.

How has that been different to your experience in the US with lawmakers, that motivation to regulate? Do you see similarities or differences?

I think there is a lot of bipartisan support for doing something. The problem is that Facebook has done a really good job of reducing the conversation down to, basically, do we break up the company? Or are we censoring too much, or are we censoring too little?

And the thing that I think is really exciting about my disclosures is the reality is there are many, many, many, many more solutions, many of which are content neutral, right? It’s not about picking good ideas or bad ideas. It’s about how do we make the platform safer as a whole? And I think expanding that conversation and saying, like, how do we want to interact with the platform is opening a door to having different kinds of conversations. Because when we were stuck in that world of, like, do we break the company up, or like, how much censorship should we have, we were deadlocked.

I am incredibly grateful for all the advocates, all the researchers, all the people who have really prepped legislators in the EU and the UK, because it’s very clear that you guys are just further down the road. And I think part of that is Facebook invested a huge sum in making the US a little bit safer. And so I think there is less pressure to fix it because we’re not living with the consequences of these platforms. And that’s, like, a core problem.

You’ve talked a lot about the problems, particularly the multilingual issues and with AI models, which Facebook often touts as the solution, right? Even one of our editors, Gillian Tett, met somebody last week from Facebook who said, no, this is going to change everything. Like, AI is the answer, right? But we’ve seen that it doesn’t work if you don’t have it in every single language that you operate in. And you know, we were talking about Assamese as a language, but there’s also Arabic and Spanish. And if AI doesn’t exist, then the problems continue.

But then also, do we need more humans then? Are we going to need to have humans who are moderating every single language? What is a solution to the problem of moderation across even TikTok and others? Is it humans, AI, or where do we sit? Where do we kind of end up?

So the problem with having more humans is we will likely never hire enough humans in order to do this. And I worry sometimes about over enforcement, or like, who gets to decide what is a good or bad idea? Like, I feel very nervous about that. And so I would rather have us have a more conservative solution where we make the platform safer, but we don’t touch them all the time. Because when we touch them all the time… like, Mark came in after my first testimony to Congress and said polarising extreme content? It’s OK. We’re just not going to put politics in your newsfeed anymore.

What does that mean? Who gets to define what’s political or not political? They never disclosed how they made that political classifier. They’ve not disclosed examples from it. That is not safe in a democracy.

So Facebook needs to not be allowed to keep saying AI is the solution, when their own documents say, at best, in the case of hate speech, we will catch 10 per cent to 20 per cent of hate speech. And I want to hear their commitment.

There’s a huge wave of political ads, but also other types. It could be around body image. It could be around eating. So many kind of mental health issues that come through from adverts that are also promoted heavily. Does ad tech need to be reformed as well?

I think there is a big need to have a conversation on ad tech. There are some examples that I think are completely not obvious. So for example, engagement-based ranking happens on ads as well. And so it means that a hateful political ad is 5 to 10 times cheaper than an empathetic and compassionate ad. So we are subsidising hate today in our political adverts.

How has it been for you since the first kind of stories coming out in The Wall Street Journal? And kind of, can you tell us a bit about how, kind of up to today, what’s your journey been like?

So my original plan was I wanted to be behind the scenes. Like, I really don’t like attention. Like, I don’t throw birthday parties, for example, because I feel uncomfortable with being the centre of attention. I definitely did not expect the days to be this long when I decided to come out.

So I decided to come out because my lawyers were totally reasonable. They were very practical. They said, hey, you want to work with regulators. Like, you’ve said that since the beginning. Your job is to explain and facilitate. Because these documents are really complicated. Like, you need to be an expert to get at least the bare bones of context.

And they said, let’s be realistic here. Every time you have a meeting with 10 people, the circle of trust gets 10 people bigger. And the only thing keeping you safe is obscurity. And as soon as you’re in a zone where you have to, like, have 100 per cent confidence that no one is ever going to leak your name, like, you basically are at risk all the time. And so if you want to be able to work with regulators, if you want to maximise your impact, you’re going to have to come forward.
