Ofcom Releases its Proposal for Implementing the Online Safety Act

Ofcom has published a detailed overview of how it intends to enforce the Online Safety Act. We take a look at this proposal.

Earlier, we noted that the UK regulator, Ofcom, was going to release a proposal on how it intends to enforce the Online Safety Act. That release has now happened, and information about it can be found here.

The release contains several documents, including the regulator's proposal at a glance (PDF).

Separating Out Services

One of the first things the document does is separate what is large and small:

1.3 Whether some of the measures are recommended for a particular service can depend on the size of the service and how risky it is. The different columns show different types of services. The columns are divided into two groups by size:
a) Large services. As discussed further below, we propose to define a service as large where it has an average user base greater than 7 million per month in the UK, approximately equivalent to 10% of the UK population.
b) Smaller services. These are all services that are not large, and will include services provided by small and micro businesses.

1.4 We sub-divide each of these broad size categories into three:
a) ‘Low risk’ refers to a service assessed as being low risk for all kinds of illegal harm in its risk assessment.
b) ‘Specific risk’ refers to a service assessed as being medium or high risk for a specific kind of harm for which we propose a particular measure. Different harm-specific measures are recommended depending on which risk a service has identified. A service could have a single specific risk, or many specific risks. We are not currently proposing harm specific measures for specific risks of each kind of harm. The notes beneath Table 1 explain which risks of a kind of harm different measures relate to.
c) ‘Multi risk’ refers to a service that faces significant risks for illegal harms. For such services, we propose additional measures that are aimed at illegal harms more generally, rather than being targeted at specific risks. As described in paragraph 11.46, our provisional view is to define a service as multi-risk where it is assessed as being medium or high risk for at least two different kinds of harms from the 15 kinds of priority illegal harms set out in the Risk Assessment Guidance.

Apart from putting a hard number on what constitutes a large or small service (the threshold being 7 million monthly UK users), this alone… doesn’t really say a whole lot. The regulator is adding these categories for seemingly arbitrary reasons, given that it simply wants to “assess” them.
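To make the categorisation a little more concrete, here’s a rough sketch (in Python) of how the proposed buckets would shake out based on the quoted criteria: the 7 million average monthly UK user threshold and the “medium or high risk for at least two kinds of harm” test for multi-risk services. The structure and names are mine, not Ofcom’s.

```python
from dataclasses import dataclass

LARGE_SERVICE_THRESHOLD = 7_000_000  # average monthly UK users, per paragraph 1.3(a)

@dataclass
class Service:
    monthly_uk_users: int
    # assessed risk per kind of priority illegal harm, e.g. {"fraud": "medium"}
    harm_risk_levels: dict

def size_category(service: Service) -> str:
    """'Large' if the service clears the 7 million monthly UK user threshold."""
    return "large" if service.monthly_uk_users > LARGE_SERVICE_THRESHOLD else "smaller"

def risk_category(service: Service) -> str:
    """'Low risk', 'specific risk', or 'multi risk', per paragraph 1.4."""
    elevated = [harm for harm, level in service.harm_risk_levels.items()
                if level in ("medium", "high")]
    if len(elevated) >= 2:
        return "multi risk"      # medium/high for at least two kinds of harm
    if len(elevated) == 1:
        return "specific risk"
    return "low risk"

# Example: a mid-sized forum assessed medium risk for two kinds of harm
forum = Service(monthly_uk_users=250_000,
                harm_risk_levels={"harassment": "medium", "fraud": "medium", "csam": "low"})
print(size_category(forum), "/", risk_category(forum))  # smaller / multi risk
```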

Expectations for Each Category

The document does contain a massive chart detailing what appear to be the expectations for each type of platform. In one example, it expects any platform considered multi-risk, regardless of size, to have this:

Content moderation systems or processes are designed to take down illegal content swiftly

The real question is, what constitutes “swiftly”? A week? 24 hours? 2 hours? 5 minutes? Ofcom is really swinging for the fences here, and it’s unclear how many platforms, regardless of size, can make any of this workable.
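To illustrate the problem: any platform trying to operationalise “swiftly” has to invent its own number. A minimal sketch, assuming a self-imposed 24-hour target (a figure I’ve picked purely for illustration; it appears nowhere in the proposal):

```python
from datetime import datetime, timedelta, timezone

# "Swiftly" is undefined in the proposal, so the target below is purely my own
# placeholder. A platform would have to pick a number and hope it passes muster.
TAKEDOWN_TARGET = timedelta(hours=24)

def is_overdue(reported_at: datetime, now: datetime) -> bool:
    """Flag a report that has sat in the moderation queue past the chosen target."""
    return now - reported_at > TAKEDOWN_TARGET

reported = datetime(2023, 11, 9, 8, 0, tzinfo=timezone.utc)
checked = datetime(2023, 11, 10, 10, 0, tzinfo=timezone.utc)
print(is_overdue(reported, checked))  # True -- 26 hours in the queue
```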

There are, of course, a multitude of other expectations that platforms of all shapes and sizes are expected to meet beyond that. These include the following:

Appropriate action: indicative timeframes for considering complaints should be sent to complainants

Appropriate action for complaints: illegal content complaints should be handled in accordance with our proposed content moderation recommendations

I guess the UK wants to take over the content moderation of platforms.

A named person is accountable to the most senior governance body for compliance with illegal content safety duties, and reporting and complaints duties

Wouldn’t want to draw the short straw on that one.

Systems and processes are designed so that search content that is illegal content is deprioritised or deindexed for UK users

I wouldn’t personally know where to begin with that one.

Appropriate action for complaints: illegal content complaints should be handled in accordance with our proposed search moderation recommendations

Again with the government taking over a complaints structure.

This next one is for the larger platforms:

Users have a means to easily report predictive search suggestions which they believe can direct users towards priority illegal content

WTF?

Duty to carry out a further suitable and sufficient risk assessment before making any significant change to any aspect of a service’s design or operation.

So, if you change the hex value of a text colour from black to a dark grey, better make sure to get that risk assessment in. Don’t want anyone harmed by the new shade of grey.

We propose to include as guidance that as a minimum, service providers should conduct a compliance review at least once a year. Services should review their risk assessments annually.

A yearly assessment. Yeah, because that won’t bump up the cost of doing business. Not at all!

Records should be updated to capture changes to a risk assessment or Code measure, but earlier versions should be retained so the provider is able to provide both current and historic records of how it has complied with the relevant duties

Have fun with the storage costs of that one!
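For what it’s worth, the requirement itself boils down to an append-only record store: never overwrite an assessment, always keep the old version alongside the new one. A minimal sketch of that idea (my own naming and structure, not anything Ofcom prescribes):

```python
import copy
from datetime import datetime, timezone

class RiskAssessmentRecords:
    """Append-only store: every change is added as a new version and earlier
    versions are kept, so both current and historic records can be produced."""

    def __init__(self):
        self._versions = []

    def record(self, assessment: dict) -> None:
        entry = copy.deepcopy(assessment)
        entry["recorded_at"] = datetime.now(timezone.utc).isoformat()
        self._versions.append(entry)  # never overwrite, always append

    def current(self) -> dict:
        return self._versions[-1]

    def history(self) -> list:
        return list(self._versions)

records = RiskAssessmentRecords()
records.record({"harassment": "medium", "fraud": "low"})
records.record({"harassment": "high", "fraud": "low"})  # updated assessment
print(len(records.history()))  # 2 -- both versions retained
```

The storage only ever grows under this model, which is exactly why the costs pile up.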

A Second Document

Along with this is a second document offering some additional details (PDF). This document apparently contains a summary of the different chapters. It offers this near the beginning:

This has a number of implications for our work:
• Firstly, we will flex our expectations depending on the type of service we are dealing with – we will not expect the same of a small low risk service as we do of the largest or riskiest services.
• Secondly, we will need to adapt our approach and expectations over time to reflect the emergence of new technologies and types of U2U or search services. We will scan the horizon for new developments and, when necessary, we will update our codes to reflect the emergence of new risks and new options for mitigating risks. As we explain below, we will also expect services to monitor the emergence of new risks.
• Thirdly, as described in our background section, we will need to use a combination of different regulatory levers to achieve our goals and to use different levers to influence different types of service. For example, sometimes we will seek to drive change by: setting expectations in our codes of practice; taking enforcement action against services which are not complying with the regulations; using our research and our transparency reporting powers to shine a light on what services are doing to tackle online harms and generating reputational incentives for them to make improvements; and engaging with services and discussing with them where we consider they should be doing more to improve user safety.

Again, what constitutes a less risky or a more risky service? The excerpt doesn’t really say. All we know is that the regulator will make that determination using methods that seemingly only it knows. Go figure.

What is also notable is that the regulator is seemingly saying that it sets the expectations and standards, and it’s up to the platforms, regardless of size, to comply with them. This sets the stage for the regulator to be more of an “ideas person”, expecting the platforms to basically carry out whatever magical thinking it comes up with. After all, failure to comply could mean penalties. Now who wouldn’t want to operate in that kind of business environment?

Lofty Expectations

The document then goes on to detail how everything from livestreaming to recommendation systems can be used for nefarious purposes:

Although a very wide range of service types pose risks of the priority illegal harms in the Act, certain service types appear to play a particularly prominent role in the spread of priority illegal content. In particular, our analysis suggests that file-storage and file-sharing services and adult services pose a particularly high risk of disseminating CSAM, and social media services play a role in the spread of an especially broad range of illegal harms. Similarly, certain ‘functionalities’ stand out as posing particular risks:

• End-to-end encryption: Offenders often use end-to-end encrypted services to evade detection. For example, end-to-end encryption can enable perpetrators to circulate CSAM, engage in fraud, and spread terrorist content with a reduced risk of detection.

• Pseudonymity and anonymity: There is some evidence that pseudonymity (where a person’s identity is hidden from others through the use of aliases) and anonymity can embolden offenders to engage in a number of harmful behaviour with reduced fear of the consequences. For example, while the evidence is contested, some studies suggest that pseudonymity and anonymity can embolden people to commit hate speech. At the same time, cases of harassment and stalking often involve perpetrators creating multiple fake user profiles to contact individuals against their will and to circumvent blocking and moderation.

• Livestreaming: There are many examples of terrorists livestreaming attacks. This can in turn incite further violence. The use of livestreaming remains a persistent feature of far-right lone attackers, many of whom directly reference and copy aspects of previous attacks. Similarly, perpetrators can exploit livestreaming functionality when abusing children online. For instance, livestreaming can be used as a way of conducting child sexual abuse by proxy, where children are coerced into abusing themselves or other children in real-time on camera.

• Recommender systems: Recommender systems are commonly designed to optimise for user engagement and learn about users’ preferences. Where a user is engaging with harmful content such as hate speech or content which promotes suicide, there is a risk that this might result in ever more of this content being served up to them.

Ofcom later does a lot of throat-clearing and tries to make what it is asking for sound reasonable:

The functionalities we describe above are not inherently bad and have important benefits. End-to-end encryption plays an important role in safeguarding privacy online. Pseudonymity and anonymity can allow people to express themselves and engage freely online. In particular, anonymity can be important for historically marginalised groups such as members of the LGBTQ+ community who wish to talk openly about their sexuality or explore gender identity without fear of discrimination or harassment. Recommender systems benefit internet users by helping them find content which is interesting and relevant to them. The role of the new online safety regulations is not to restrict or prohibit the use of such functionalities, but rather to get services to put in place safeguards which allow users to enjoy the benefits they bring while managing the risks appropriately.

The problem is, of course, the safeguards being asked for. For instance, a well-known ask is that the government wants encryption to be broken so that police have back-door access to encrypted communications, all under the guise of “safeguards” or “guard rails”, and all in spite of the fact that what is being asked for is impossible to do without breaking encryption for everyone.

The document gets even wilder with this:

We are making the following proposals for all multi-risk services and all large services:
• Written statements of responsibilities for senior members of staff who make decisions related to the management of online safety risks.
• Track evidence of new kinds of illegal content on their services, and unusual increases in particular kinds of illegal content, and report this evidence through the relevant governance channels. U2U services should also track and report equivalent changes in the use of the service for the commission or facilitation of priority offences.

So, apparently, it’s the platform’s responsibility to predict, track, and report anything considered “harmful”. This is basically the very definition of chasing a moving target. There’s no way any platform can reasonably be expected to comply with this 100% of the time. Yet, somehow, that’s the expectation being set here.
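Even the narrower “unusual increases” part leaves the platform guessing at what counts as unusual. A minimal sketch of one way a service might flag a spike in a particular kind of illegal content, with the baseline window and threshold being my own arbitrary choices:

```python
from statistics import mean, stdev

def unusual_increase(weekly_counts: list, threshold_sigmas: float = 3.0) -> bool:
    """Flag the latest week's count of a given kind of illegal content if it sits
    well above the historical baseline. What counts as 'unusual' is my guess;
    the proposal doesn't define it."""
    history, latest = weekly_counts[:-1], weekly_counts[-1]
    if len(history) < 4:
        return False  # not enough baseline data to call anything unusual
    baseline, spread = mean(history), stdev(history)
    return latest > baseline + threshold_sigmas * max(spread, 1.0)

# Example: a sudden spike in fraud reports after eight quiet weeks
print(unusual_increase([12, 9, 14, 11, 10, 13, 12, 11, 57]))  # True
```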

A Shadow Ban Hammer

Ofcom also expects search services to manipulate search results with this:

We are making the following proposal for all search services:
• Have systems or processes designed to deindex or downrank illegal content of which it is aware, that may appear in search results. In considering whether to deindex or downrank the content concerned, services should have regard to the following factors: (i) the prevalence of illegal content hosted by the interested person; (ii) the interests of users in receiving any lawful material that would be affected; and (iii) the severity of harmfulness of the content, including whether or not the content is priority illegal content.

This is effectively shadowbanning: the content is not removed, but it becomes nearly impossible for that content to surface organically.
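For illustration, here is a toy version of the weighing exercise the proposal describes, combining the three listed factors into a deindex/downrank decision. The weights, scales, and cut-offs are entirely invented on my part; the proposal gives none:

```python
def search_action(prevalence: float, lawful_interest: float, severity: float,
                  is_priority_illegal: bool) -> str:
    """Toy weighing of the three factors listed in the proposal, each scored 0-1.
    The weights and thresholds are made up for illustration only."""
    score = 0.4 * prevalence + 0.4 * severity - 0.2 * lawful_interest
    if is_priority_illegal:
        score += 0.3
    if score > 0.7:
        return "deindex"    # drop the result entirely
    if score > 0.4:
        return "downrank"   # keep it indexed, but bury it
    return "no action"

print(search_action(prevalence=0.9, lawful_interest=0.1,
                    severity=0.8, is_priority_illegal=True))  # deindex
```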

Fast Removal of Content

The document goes on to, again, say that content should be removed quickly:

The Act requires that all U2U and search services must:
• Have easy to use complaints process, which allow for users to make complaints, such as: complaints about the presence of illegal content; appeals where content may have been incorrectly identified as illegal; complaints about reporting function; complaints about a service not complying with its duties; complaints about the use of proactive technology in a way that is inconsistent with published terms of service; and
• take appropriate action in response to complaints.

The immediate question I have is that of time windows. What is “appropriate action”? Is it a response within a week? A month? An hour? Once again, the passage doesn’t say.

Expectations to Predict Exposure to Harmful Content

Ofcom seems to think it’s reasonable to expect platforms to predict whether an automated system might expose someone to something harmful:

We are making the following proposals for U2U services which already carry out on-platform tests of their recommender systems and that identify as medium or high risk for at least two specified harms:
• Services should, when they undertake on-platform tests, collect safety metrics that will allow them to assess whether the changes are likely to increase user exposure to illegal content.

I’ve read this several times and I’m still not sure how it can be carried out in practice. A big question I have is: what if testing found no additional risk, yet when the change was implemented, someone was somehow exposed to something “harmful” through the new system? Does that mean the platform is somehow at fault in that scenario? Are platforms to be reprimanded in some way? Again, I’m completely mystified as to how this would even work in practice.
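As best I can tell, the ask amounts to comparing exposure rates between the current recommender and the candidate change during an on-platform test, something like the sketch below. The metric, the labels, and the tolerance are all assumptions on my part; the proposal doesn’t define any of them:

```python
def exposure_rate(impressions: list) -> float:
    """Share of recommended impressions later labelled as illegal content."""
    if not impressions:
        return 0.0
    flagged = sum(1 for item in impressions if item.get("labelled_illegal"))
    return flagged / len(impressions)

def change_increases_exposure(control: list, treatment: list,
                              tolerance: float = 0.001) -> bool:
    """Compare the candidate recommender change against the current system
    during an on-platform test. The tolerance is arbitrary; the proposal
    offers no threshold for what 'likely to increase' means."""
    return exposure_rate(treatment) > exposure_rate(control) + tolerance

control = [{"labelled_illegal": False}] * 1000 + [{"labelled_illegal": True}] * 2
treatment = [{"labelled_illegal": False}] * 1000 + [{"labelled_illegal": True}] * 9
print(change_increases_exposure(control, treatment))  # True
```

And even with a test like that passing, none of it answers what happens when someone is exposed to something “harmful” anyway.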

General Thoughts

I can personally only see this whole system hitting a brick wall sooner or later. Ofcom is clearly asking for the moon here. It wants platforms to predict how people can be exposed to “harmful” content. When “harmful” can mean just about anything, the platforms will, at best, be chasing moving targets. At what point do these expectations become unreasonable? Ofcom can clearly redefine its expectations at any time, which doesn’t really make things any better. When I read these headache-inducing documents, all I get out of them is a regulator wanting platforms to wave a magic wand and make the bad stuff go away. Real life… doesn’t work that way.

Eventually, there will be questions about what is reasonable for a business operating in the country. That alone could very easily sink parts, if not all, of this ridiculous law. As others have said, it’s a pretty safe bet that this law will collapse under its own weight. Having read these documents, I think those comments have a lot of merit.

