UK Online Safety Regulations and their impact on forums

I will see if I can find out which document/section it is in later, but there is a bit about preserving the freedom of expression of UK users, which implies they can complain if we delete their posts, and then we have to respond to that complaint in a timely fashion.

I am fairly sure I read somewhere that this had to be held for 3 years too, unless I dreamt that. There really is so much to take in, and it's scattered across multiple documents.

XenForo obviously already has a "reason for deletion" field, which will go some way towards this, but we might have to make that compulsory, and also ensure posts are only ever soft-deleted.
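Something like this is the shape of record I have in mind whenever a post is removed (just a rough Python sketch to show the idea, not XenForo code; the field names, the "grounds" values and the three-year retention figure are my own assumptions and still need confirming against the actual documents):

from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=3 * 365)  # assumed 3-year retention, to be confirmed

@dataclass
class DeletionRecord:
    post_id: int
    deleted_by: str            # moderator username
    deleted_at: datetime
    reason: str                # compulsory, never empty
    grounds: str               # "terms_of_service" or "illegal_content"
    soft_deleted: bool = True  # posts are only ever soft-deleted, never hard-deleted

def record_deletion(post_id: int, moderator: str, reason: str, grounds: str) -> DeletionRecord:
    # Refuse the removal outright if no reason is supplied
    if not reason.strip():
        raise ValueError("A reason for deletion is compulsory")
    if grounds not in ("terms_of_service", "illegal_content"):
        raise ValueError("Grounds must say why the post was removed")
    return DeletionRecord(post_id, moderator, datetime.utcnow(), reason.strip(), grounds)

def can_purge(record: DeletionRecord, now: datetime) -> bool:
    # Records older than the retention period may be purged
    return now - record.deleted_at > RETENTION

The key points are that the reason can never be empty and the record outlives the (soft-deleted) post for the whole retention period.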

BARF! So glad I don't live or operate in that market. Holy crap.
 
Where in the regulations does it say that?
You can see that included in my forum owner's template guide at https://github.com/centminmod/xenforo-online-safety-act-uk-template

Also, if you have Google Gemini and load up the Act's PDF/doc, you can query against it.

Prompt:
Where in the act does it mention site requirements to record the actions it takes against reported illegal content from the 17 categories we have to assess for?
Gemini's response:
The duty to keep records of risk assessment is mentioned in section 23(2) and (10).

 
Which bit?
Section 10 of the template, under 'Record keeping': https://github.com/centminmod/xenfo...template?tab=readme-ov-file#10-record-keeping :)

We will maintain records of complaints, investigations, actions taken, and risk assessments to the best of our ability, in compliance with data protection regulations and the OSA, and with due regard to Ofcom's Codes of Practice, particularly regarding the complaints and appeals processes (ICU D1 to ICU D14). However, our record-keeping capacity is limited by our volunteer nature, lack of dedicated systems, and reliance on basic tools. We will prioritize record-keeping for cases involving illegal content and content harmful to children, particularly for any complaints related to our enforcement of our terms of service. We will use Ofcom's guidance on record-keeping to inform our practices, but our implementation will necessarily be limited by our resources.

We maintain records of:

  • All complaints received, including details and actions taken.
  • Risk assessments conducted and their outcomes.
  • Actions taken in response to illegal content reports.
These records are kept securely and are accessible for review as required by the OSA and data protection regulations. Due to our volunteer-run nature, record-keeping is managed using basic tools, prioritizing cases involving illegal content and child safety.
and Section 14 of the template for the user redress process: https://github.com/centminmod/xenfo...tab=readme-ov-file#14-user-redress-mechanisms
 
OK, but you were talking about keeping records about post deletions as required by UK regulations. That's all I was asking about. Those links don't seem to be UK regulations.
The template I made is based on the UK Online Safety Act's regulations and requirements. I just did the heavy lifting on the research when I wrote the template, so forum owners have something to base theirs on and can make sure they are meeting the Act's requirements :) Use the template as a base, modify it for your specific forum, and then develop the procedures and practices. The template already has an example 17-category assessment; section 7.5 has an assessment table forum owners can use as a starter template https://github.com/centminmod/xenfo...ab=readme-ov-file#75-risk-assessment-findings :D
 
But freedom of expression has nothing to do with posts on a website; I don't follow that implication. It's my site and I can delete what content I like. I think maybe you are reading too much into the phrase "freedom of expression".

Maybe, but I am going off ICU 9.19 and 9.20 (safeguards for freedom of expression), which then say there is a right of appeal against removed posts under the complaints process we must have, which is detailed in Section D.

I think you are right that you can delete content that isn't allowed by our own T&Cs, whatever the Act may say, but if someone complains, we need to demonstrate that this is the reason it was deleted and not because it was, or was suspected to be, "illegal" content. Hence storing the reason when we actually remove such posts.
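To make that concrete, handling a complaint would then just be a case of pulling up the stored record and showing which basis the removal was made on (a hypothetical Python sketch reusing the DeletionRecord idea from my earlier post; the response wording is only illustrative):

def handle_appeal(post_id, records):
    # records: dict mapping post id -> DeletionRecord (see earlier sketch)
    record = records.get(post_id)
    if record is None:
        return "No deletion record found; escalate to an administrator."
    if record.grounds == "terms_of_service":
        # Evidence that the post was removed under our own T&Cs,
        # not because it was judged to be illegal content.
        return (f"Removed under the forum T&Cs on "
                f"{record.deleted_at:%Y-%m-%d}: {record.reason}")
    # Illegal-content removals go through the separate review route.
    return f"Removed as suspected illegal content: {record.reason} (under review)"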
 
there is a right of appeal against removed posts under the complaints process we must have, which is detailed in Section D.
So what you are implying is that all those sites that close down because they are scared of these regulations would need to write down the reason for every post which is removed due to the site closing down?
 
So what you are implying is that all those sites that close down because they are scared of these regulations would need to write down the reason for every post which is removed due to the site closing down?

I am not implying that at all. If your site is closed down, it is no longer a U2U site and no longer subject to these rules, so you wouldn't be breaking them and Ofcom wouldn't be interested in the slightest.

This is all about what we need to do to keep sites open and in line with the new law.
 
I wonder what metrics Ofcom deem acceptable for their rule about 700,000 monthly UK users, which is the threshold used when determining whether CSAM and URL detection is required?

They do say it is ALL visitors to the site, including those who read and don't contribute, so it's not related directly to the member base.

If we go off Google Analytics, we are above this figure and will fall foul of that part. But I don't believe in reality we are even close to it; most of those we see are almost certainly repeat visitors, which just isn't detected any more as tracking individuals is so much harder since GDPR etc.
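For what it's worth, a crude way to sanity-check the analytics figure is to count distinct visitors per month straight from the server access logs, e.g. by hashing IP plus user agent (a rough Python sketch under my own assumptions; it still over-counts people whose IP changes, and without a geo-IP lookup it says nothing about who is actually in the UK):

import hashlib
from collections import defaultdict

def monthly_unique_visitors(log_entries):
    # log_entries: iterable of (month, ip, user_agent) tuples pulled from the
    # access logs, e.g. ("2025-01", "203.0.113.5", "Mozilla/5.0").
    # Treat the result as a rough upper bound per month.
    seen = defaultdict(set)
    for month, ip, user_agent in log_entries:
        visitor_id = hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()
        seen[month].add(visitor_id)
    return {month: len(ids) for month, ids in seen.items()}

# Toy example: two distinct visitors in January
sample = [
    ("2025-01", "203.0.113.5", "Mozilla/5.0"),
    ("2025-01", "203.0.113.5", "Mozilla/5.0"),
    ("2025-01", "198.51.100.7", "Mozilla/5.0"),
]
print(monthly_unique_visitors(sample))  # {'2025-01': 2}

Comparing that per-month count against the 700,000 figure is at least a different data point from what Google Analytics reports.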
 
I asked GPT about these subjects.

Is there any admin here not following these recommendations?
Thanks, that actually seemed very useful. I have used it as a basis for an addendum to our rules, and I think the last section here goes some way towards a risk assessment. I'm just waiting to find out what we may need or want to do in regard to age verification and searches.

In the context of the UK Online Safety Regulations, "illegal content" and "potential harm to users, especially children" refer to specific types of content and behaviors that we will address to protect users, particularly vulnerable individuals such as children.
Illegal content refers to content that directly violates the law. It includes, but is not limited to:

  1. Child Sexual Abuse Material (CSAM): Any form of content that depicts or promotes the sexual abuse or exploitation of children.
  2. Terrorist Content: Content promoting terrorism, including terrorist attacks, extremist ideologies, and recruitment materials for terrorist groups.
  3. Hate Speech: Content that promotes violence or discrimination based on characteristics such as race, religion, gender, sexual orientation, or disability.
  4. Fraudulent and Scamming Content: Content intended to deceive individuals for financial gain, such as phishing schemes, fraudulent offers, and fake product promotions.
  5. Intimate Image Abuse: Content involving the sharing or distribution of intimate images or videos without consent, often referred to as "revenge porn."
  6. Incitement to Violence: Any content that promotes or encourages violence, self-harm, or criminal activity.
Under the Online Safety Regulations, platforms are required to take measures to prevent such illegal content from being shared, and must have systems in place for users to report it and for it to be removed promptly.
Potential harm to users, especially children, refers to content or behaviors that may not necessarily be illegal but still pose significant risks, particularly to children. These may include:

  1. Cyberbullying and Harassment: Online bullying or harassment, which can lead to emotional distress, depression, or even self-harm, particularly in young people.
  2. Exposure to Harmful or Disturbing Content: Content that could have a negative psychological effect on children, such as graphic violence, self-harm tutorials, or explicit material not related to sexual abuse but still harmful to a child's mental or emotional well-being.
  3. Misinformation and Disinformation: False or misleading content, especially around sensitive topics like health, that may lead children to make dangerous decisions or develop incorrect beliefs. This can include anything that may go against national or government safety advice in regard to pandemics.
  4. Addiction and Excessive Use: Platforms that encourage excessive screen time or addiction to certain types of content, such as gaming or social media, which can interfere with a child's development, education, and well-being.
  5. Predatory Behavior: Online grooming or manipulation by adults trying to exploit or abuse children. This may include predatory messaging, inappropriate content, or online activities aimed at developing a relationship with a minor for harmful purposes.
To address illegal content and minimize harm, especially to children, we carry out the following measures:
  • Identify and Block Illegal Content:
    • All posts by new members are flagged as requiring moderation before being made public. This helps us to identify users who have not joined with a "legitimate interest" in the forum topic.
    • We currently use human moderators to detect and prevent the sharing of illegal or harmful content.
    • Our report system allows any registered user to report any content which either infringes the Cafesaxophone regulations or contains illegal or harmful content, and ensures it is removed in a timely manner.
    • We regularly monitor uploaded file attachments to Direct Messages (aka DMs or PMs), but moderators do not monitor the text or linked content therein unless there is a specific reason to do so (e.g. suspected abuse, or illegal or harmful use of the system). However, our report system extends to Direct Messages, so any participant in a DM may flag DM content via the report system.
  • Implement Age Verification:
    • Members are required to declare their age range (13 to 17 or over 18) at the time of registering. Minors have no permissions to either send or receive direct messages (see the sketch below).
    • All content on the forum must be suitable for all age groups.
By taking these actions, we minimize the risks associated with illegal content and protect users, particularly vulnerable groups such as children, from harm.
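For the DM restriction on minors mentioned under age verification above, the check itself is trivial once the age range is captured at registration (a generic Python sketch, not the actual XenForo permission system; the names used here are my own):

from enum import Enum

class AgeRange(Enum):
    TEEN = "13-17"   # declared at registration
    ADULT = "18+"

def can_use_direct_messages(age_range: AgeRange) -> bool:
    # Members who declare 13 to 17 can neither send nor receive DMs
    return age_range is AgeRange.ADULT

assert can_use_direct_messages(AgeRange.ADULT)
assert not can_use_direct_messages(AgeRange.TEEN)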
 
Thanks, that actually seemed very useful. I have used it as a basis for an addendum to our rules, and I think the last section here goes some way towards a risk assessment. I'm just waiting to find out what we may need or want to do in regard to age verification and searches.

From what I have read about searches, I don't see it being a problem for most forums.

The main bit around searches is for "large" sites, which most of us aren't by their definition. The bit that applies to the rest of us is about searches external to our domain, but it specifically excludes liability when you use an external supplier, such as a Google search box.

Age verification we will have to wait and see...
 
Age verification we will have to wait and see...
I remember years ago there was a thing on a vBulletin forum I moderated, COPPA. This meant that any kids joining needed a letter from a parent. But there was nothing to stop them lying about their age; it's what people do.
 
I remember years ago there was a thing on a vBulletin forum I moderated, COPPA. This meant that any kids joining needed a letter from a parent. But there was nothing to stop them lying about their age; it's what people do.
And we cannot legally verify their age. So there is no clear action for this if they lie about their age.
 