Tech executives threatened with jail time under proposed U.K. content law. PLUS: just what is “untoward content” in Russia?


Executives from Facebook parent Meta Platforms, TikTok and other big tech companies would face the prospect of jail time under sweeping new legislation proposed by the U.K. government that aims to curb illegal and harmful internet content.

Meanwhile, in Russia, a challenge to “editorial policy”.

BY:

Salvatore Nicci
Technology Analyst / Reporter
PROJECT COUNSEL MEDIA


17 March 2022 (London, UK) – Under the Online Safety Bill, to be introduced today, tech executives may face criminal prosecution if they fail to comply with decisions made by the regulator in charge of enforcing the law. That penalty would kick in two months after the legislation, if passed, comes into effect, according to a briefing of the plans sent to us. That’s a more aggressive timeline than the two years mapped out in draft proposals last year.

The provision to include liability for senior managers, not just companies, has been dubbed the “Nick Clegg law” in Downing Street and among U.K. lobbyists in reference to Meta Platforms’ president of global affairs, two people familiar with the matter said. Executives will be found criminally liable if they destroy evidence, give false information to the regulator or obstruct officials from entering company offices, the briefing said.

Companies including Meta, along with civil society groups, have been lobbying against the bill, concerned that it would put onerous burdens on tech firms and curtail free speech. But the government’s move to narrow the window for compliance will effectively start a timer for the industry. The U.K. government hopes to pass the law by the end of this year or by early 2023, according to a person familiar with the matter.

For several years, the U.K. government has been drafting the Online Safety Bill, arguing that radical changes to regulation were needed to keep children and teenagers safe on the internet. The proposed legislation would impose fines on companies of up to 10% of their global revenue if they fail to remove illegal content such as child sex abuse material. The bill would also require platforms to protect younger users from harmful content. The government’s briefing flagged content that was linked to self-harm, harassment and eating disorders.

The proposals would effectively roll back long-standing liability protections for tech companies, commonly known as the “safe harbor” principle created by Europe’s e-Commerce Directive. The protections are akin to those under Section 230 of the Communications Decency Act in the U.S., which largely shields internet companies from liability for content posted by users. Some legislators in the U.K. want companies to be held legally accountable for all content that appears on their platforms, even encrypted messages.

While some of the tougher parts of the legislation appear controversial in tech circles, the Boris Johnson–led government is confident the public largely supports the bill, particularly the measures aimed at protecting children, according to a person familiar with the matter. The bill was crafted against the backdrop of individual stories about children and teenagers affected by self-harm and eating disorder posts on TikTok and Instagram, incidents that regularly made the front pages of the country’s news media. Recent polling on parts of the legislation shows a majority of respondents favor the Online Safety Bill. Because Johnson’s Conservative Party commands a majority of seats in the House of Commons, the bill is expected to pass.

Some big tech companies have been hiring lobbyists and policy specialists in preparation for the bill’s introduction. In February, Meta tapped Monica Thurmond Allen, a former special adviser to Clegg, as public policy director in London, an appointment first reported by Politico. The company is also searching for a Europe-based lobbyist focused on the metaverse, according to a public job listing.

At TikTok, Theo Bertram, who previously served as a senior adviser to former U.K. prime minister Gordon Brown, has been leading the video app’s Europe lobbying efforts since 2019. And the company has hired Nancy Jones, a policy manager at communications regulator Ofcom—the U.K. body tasked with enforcing the new laws. She’s joined TikTok’s Dublin-based global trust and safety team, which helps make and enforce the company’s guidelines for what users can post.

The proposed legislation could set the pace for other countries to follow. Post-Brexit, the U.K. under the Conservative Party government has been going harder on big tech than its counterparts in Europe, a dynamic that was on display when the U.K. antitrust authority moved to block Facebook’s acquisition of GIF-maker Giphy last year. The U.K. law, if passed, could create a ripple effect of other legislation across the world, similar to what happened with Europe’s privacy laws, the General Data Protection Regulation. In a BBC story this morning, Richard Allan, a former policy vice president at Meta who serves as a member of the U.K. House of Lords, said:

“I think you’ll certainly get the Australia, New Zealand, Canada club looking over each other’s shoulders at what they’re doing, and I think there’ll be consistency there.”

Tech industry groups have already been pushing back against the proposed rules, complaining they would place undue burdens on small- and medium-size tech businesses. Meta Platforms told a joint parliamentary committee that reviewed a draft of the bill last year that the definition of “harm” was too vague, and that the risk of big fines could prompt companies to overcorrect and take down too much content.

Civil liberties groups, which are not typically aligned with tech companies such as Meta, have also come out in opposition to the bill. Said Mark Johnson, legal and policy officer for nonprofit Big Brother Watch:

“The government should be adopting a rule of law approach to online speech and reining in, not empowering, big tech speech police. But by compelling platforms to target lawful speech, which is deemed to be ‘harmful,’ they are making social media censorship state-backed.”

After a vigorous lobbying campaign by publishers in recent years, the government has also designed the new laws with a significant carve-out: news content. While social media platforms, apps and search engines will face the brunt of the new regulations, the rules will exempt the content of companies running news websites, according to the briefing.

It will be an uphill battle for policy teams at tech companies to seriously weaken the laws by lobbying MPs to add amendments during the parliamentary debate. The teams’ attention will likely shift to winning policy debates with the regulator about how to interpret and enforce the laws, said one tech industry lobbyist.


Each morning our team scans 900+ pieces of content via our API news feed, a proprietary system built by our CTO that delivers news content and data at scale. It searches and filters news content by entities, categories, topics, and sentiment, then automatically assigns that content to specific buckets. If we want, we can then run an article through our artificial intelligence program, a text-summarizing tool that automatically condenses long articles, documents, essays, or papers into key summary paragraphs.
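To give a flavor of what that filter-and-bucket step looks like, here is a minimal Python sketch. It is purely illustrative and not our actual pipeline: the Article fields, the BUCKETS rules, and the assign_buckets function are hypothetical stand-ins for the real entity, topic, and sentiment metadata the feed produces.

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    body: str
    entities: list = field(default_factory=list)  # named entities found in the text
    topics: list = field(default_factory=list)    # topic labels from the feed
    sentiment: float = 0.0                        # -1.0 (negative) to 1.0 (positive)

# Hypothetical bucket rules: a bucket matches when an article shares
# at least one topic or one entity with the rule.
BUCKETS = {
    "uk-regulation": {"topics": {"regulation"}, "entities": {"Ofcom", "Online Safety Bill"}},
    "content-moderation": {"topics": {"moderation", "misinformation"}, "entities": set()},
}

def assign_buckets(article: Article) -> list[str]:
    """Assign an article to every bucket whose topics or entities overlap its metadata."""
    matches = []
    for name, rule in BUCKETS.items():
        if rule["topics"] & set(article.topics) or rule["entities"] & set(article.entities):
            matches.append(name)
    return matches

if __name__ == "__main__":
    a = Article(
        title="Tech executives threatened with jail time under proposed U.K. content law",
        body="...",
        entities=["Ofcom", "Meta"],
        topics=["regulation"],
        sentiment=-0.3,
    )
    print(assign_buckets(a))  # ['uk-regulation']
```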

Just an FYI for our new readers not yet up to speed: an application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software. A document or standard that describes how to build or use such a connection or interface is called an API specification. A computer system that meets this standard is said to implement or expose an API. The term API may refer either to the specification or to the implementation.
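To make the specification-versus-implementation distinction concrete, here is a small, hypothetical Python sketch (the NewsFeedAPI and InMemoryFeed names are invented for illustration): the abstract class plays the role of the API specification, and the concrete class is one system that implements, or exposes, that API.

```python
from abc import ABC, abstractmethod

class NewsFeedAPI(ABC):
    """The 'specification': any conforming feed must offer these methods."""

    @abstractmethod
    def latest(self, limit: int) -> list[str]:
        """Return up to `limit` recent headlines."""

class InMemoryFeed(NewsFeedAPI):
    """One 'implementation' of the specification, backed by a plain list."""

    def __init__(self, headlines):
        self._headlines = list(headlines)

    def latest(self, limit: int) -> list[str]:
        return self._headlines[:limit]

feed = InMemoryFeed(["U.K. introduces Online Safety Bill", "TikTok reviews wartime moderation"])
print(feed.latest(1))  # ['U.K. introduces Online Safety Bill']
```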


I’ve been working through the digital pile tossed in my face each morning and noted some material related to our U.K. story above. Mashable reveals that Substack is losing writers (again) over censorship, a story about misinformation and content. The story “Why Substack Creators Are Leaving the Platform, Again” explains how Substack’s management fell on its sword.

Meanwhile, TikTok’s problems with content moderation in Russia are explained in “TikTok Struggles to Find Footing in Wartime”, which describes the company trying to figure out how to deal with what Mr. Putin perceives as untoward content. Amazon’s Twitch faces similar challenges. And so do YouTube, Twitter, and Facebook. They all struggle with their “editorial policy”. Our boss, Greg Bufithis, says it is especially a problem for TikTok because its platform is a tsunami of verifiable OSINT on the Ukraine war. As the article notes:

TikTok exploded as a social-media app with silly videos featuring lip-syncing, dance moves and practical jokes. Now some users are creating endless feeds of war memes and state propaganda that are influencing global perspectives on the conflict in Ukraine.

As tensions between Russia and Ukraine rose, TikTok grappled internally with how to deal with its heightened role in geopolitics, people familiar with the matter said. Some of TikTok’s content moderators struggled to figure out whether to avoid recommending certain posts, remove them from the app or restrict the creators’ accounts, they said.

The content moderators have also been confused about how to deal with some clips flagged by the app’s content-filtering systems, the people said. Without detailed instructions in place for war-related content, junior-level managers were charged with refining the rules as they went along, the people said. The result was inconsistencies in treatment of similar content, they said.

The content moderators were confused because there was no content policy. But the underlying issue is that none of these high-tech outfits attended to the value of what most call “editorial policy.” The old idea was that guidelines are developed by the professionals working in an information-generating operation. These are discussed, debated, and written down. Once they have been written down, the guidelines are reviewed, presented to new employees when they are hired, summarized in user documentation, described in training sessions, and mentioned (briefly or in detail) in conference posters or presentations. The main idea is to demonstrate a set of guidelines that the information-generating outfit follows.

But that has been turned on its ear. As Greg noted:

“For 20+ years I have delivered briefings on digital media, content, and ‘what-happens-when-everybody-is-an-expert-because-anybody-can-get-on-the-internet’ to start-ups, venture funds, and scores of professional groups. I can say, based on my experience, that once the Internet made everyone into an expert, very few found editorial policies particularly relevant. Now the zippy types are figuring it out. The problem is that effort is needed. Retroactive editorial policies ain’t gonna work. Disciplined thinking is necessary even if you are going to go with the flow. But even then continuity and management commitment are still important.”

And it is certainly go-with-the-flow here at Project Counsel Media. Since Russia invaded Ukraine on 24 February we have pored over photos and video clips uploaded to platforms including Facebook, YouTube and TikTok – the latter in particular because it has exceeded the others in providing a ground-level, often visceral view of modern warfare. But all of these platforms have become hotbeds of unreliable information, hence our need for, and reliance on, our OSINT network to verify everything.
