13 Feb, 2026
3 mins read

What the Online Safety Act is – and how to keep children safe online

The way people in the UK navigate the internet is changing. Under the Online Safety Act, platforms must take action – such as carrying out age checks – to stop children seeing illegal and harmful material from July. Services face large fines if they fail to comply with the UK’s sweeping online safety rules. But what do they mean for children? Here’s what you need to know.

What is the Online Safety Act and how will it protect children?

The Online Safety Act’s central aim is to make the internet safer for people in the UK, especially children. It is a set of laws and duties that online platforms must follow, implemented and enforced by Ofcom, the media regulator.

Under its Children’s Codes, platforms must prevent young people from encountering harmful content relating to suicide, self-harm, eating disorders and pornography from 25 July. This will see some services, notably porn sites, start checking the age of UK users. Ofcom’s rules are also designed to protect children from misogynistic, violent, hateful or abusive material, online bullying and dangerous challenges.

Firms which wish to continue operating in the UK must adopt a range of protective measures. Failure to comply could result in businesses being fined £18m or 10% of their global revenues – whichever is higher – or their executives being jailed. In very serious cases Ofcom says it can apply for a court order to prevent the site or app from being available in the UK.

What else is in the Online Safety Act?

The Act also requires firms to show they are committed to removing illegal content, and it has created a number of new criminal offences.

Why has it been criticised?

A number of campaigners want to see even stricter rules for tech firms, and some want under-16s banned from social media completely. Ian Russell, chairman of the Molly Rose Foundation – which was set up in memory of his daughter, who took her own life aged 14 – said he was “dismayed by the lack of ambition” in Ofcom’s codes.

3 mins read

M&S hackers claim to be behind Jaguar Land Rover cyber attack

A group of young, English-speaking hackers is claiming to be behind the cyber attack which has halted the global production lines of Jaguar Land Rover (JLR). The group is bragging about the hack on the messaging app Telegram, sharing screenshots apparently taken from inside the car maker’s IT networks.

The gang, which calls itself “Scattered Lapsus$ Hunters”, was also responsible for a wave of cyber attacks on UK retailers including M&S in the spring.

“Where is my new car, Land Rover,” the hackers – who are thought to be teens – posted to taunt the company. JLR told the BBC it was aware of the claims and was investigating.

In private text conversations, one of the criminals, who claims to be a spokesperson for the group, explained how the gang allegedly accessed the car maker’s systems. It’s understood they are now trying to extort the firm for money. But the hacker would not say if they have successfully stolen private data from JLR or installed malicious software onto the company’s network.

The hacker wouldn’t provide any more evidence – and these types of criminal gangs are known to exaggerate to get attention. But two images posted by the group show apparent internal instructions for troubleshooting a car charging issue and internal computer logs. One security expert has speculated that the screenshots suggest the criminals have access to information they should not have. “Based on the information provided by the attackers and open source intelligence, the attacker has access to JLR’s internal systems and network,” security researcher Kevin Beaumont said.

A spokesperson for the Information Commissioner’s Office said: “Jaguar Land Rover has reported an incident and we are assessing the information provided.”

‘Took immediate action’

Car production at sites including the Halewood plant in Merseyside and another in Solihull has been heavily disrupted since the attack was discovered on Sunday. Staff have been sent home and JLR has said it’s working to get manufacturing back online. The company has not disclosed the nature of the attack.

“We took immediate action to mitigate its impact by proactively shutting down our systems,” it said in a statement. “We are now working at pace to restart our global applications in a controlled manner. At this stage there is no evidence any customer data has been stolen but our retail and production activities have been severely disrupted.”

The hackers chose the name Scattered Lapsus$ Hunters to reflect the merging of various youth-orientated cyber criminals who are all associated with a network called The Com. Earlier this year the National Crime Agency warned of the growing threat from cyber criminals in The Com. The newly named group is a mixture of hackers who have been part of the groups Shiny Hunters, Lapsus$ and Scattered Spider – all notorious young hacking groups of the last few years that emerged from The Com.

The Telegram channel used by the criminals now has nearly 52,000 subscribers. The group has been bragging about hacks and sharing incomprehensible in-jokes for days. It is the fourth such Telegram channel, as previous ones have been closed down.

Scattered Spider is the name of a loosely linked group of hackers responsible for high-profile attacks on M&S, Co-op and Harrods in April and May. In July the National Crime Agency arrested four people in connection with the hacks. A 20-year-old woman was arrested in Staffordshire, and three males – aged between 17 and 19 – were detained in London and the West Midlands. All have since been released on bail.

2 mins read

Children hacking their own schools for ‘fun’, watchdog warns

More than half of school and college cyber attacks and data breaches are being carried out by their own pupils, the Information Commissioner’s Office (ICO) has revealed.

School children and college students are carrying out hacks and accessing private data for fun or as part of dares, the ICO says, calling it a “worrying trend”. It is warning teachers that they are failing to understand and recognise what it calls the “insider threat” pupils pose.

“What starts out as a dare, a challenge, a bit of fun in a school setting can ultimately lead to children taking part in damaging attacks on organisations or critical infrastructure,” said Heather Toomey, Principal Cyber Specialist at the ICO.

It comes amid a spate of high-profile cyber attacks, affecting firms including M&S and Jaguar Land Rover, in which teenage hackers have been implicated.

Since 2022, the ICO has investigated 215 hacks and breaches in education settings and says 57% were carried out by children. According to the new data, almost a third of the breaches involved students illegally logging into staff computer systems by guessing passwords or stealing details from teachers.

In one incident, a seven-year-old was involved in a data breach and was subsequently referred to the National Crime Agency’s Cyber Choices programme to help them understand the seriousness of their actions. The ICO did not give details on the nature of that breach.

In another incident, three Year 11 students aged 15 or 16 unlawfully accessed school databases containing the personal information of more than 1,400 students. The pupils used hacking tools downloaded from the internet to break passwords and security protocols. When questioned, they said they were interested in cyber security and wanted to test their skills and knowledge.

Another example the ICO gave is of a student illegally logging into their college’s databases with a teacher’s details to change or delete personal information belonging to more than 9,000 staff, students and applicants. The system stored personal information such as names and home addresses, school records, health data, safeguarding and pastoral logs and emergency contacts.

Schools are facing an increasing number of cyber attacks, with 44% of schools reporting an attack or breach in the last year, according to the government’s most recent Cyber Security Breaches Survey. Youth cyber crime culture is a growing threat linked to English-speaking teen gangs. Young alleged hackers, many of them teenagers, have been arrested in the UK and the US in the last year for hacking campaigns against major companies including MGM Grand Casinos, TfL, Marks and Spencer and Co-op.

4 mins read

BBC threatens AI firm with legal action over unauthorised content use

The BBC is threatening to take legal action against an artificial intelligence (AI) firm whose chatbot, the corporation says, is reproducing BBC content “verbatim” without its permission.

The BBC has written to Perplexity, which is based in the US, demanding it immediately stops using BBC content, deletes any it holds, and proposes financial compensation for the material it has already used. It is the first time that the BBC – one of the world’s largest news organisations – has taken such action against an AI company.

In a statement, Perplexity said: “The BBC’s claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google’s illegal monopoly.” It did not explain what it believed the relevance of Google was to the BBC’s position, or offer any further comment.

The BBC’s legal threat was made in a letter to Perplexity’s boss, Aravind Srinivas. “This constitutes copyright infringement in the UK and breach of the BBC’s terms of use,” the letter says.

The BBC also cited its research published earlier this year, which found four popular AI chatbots – including Perplexity AI – were inaccurately summarising news stories, including some BBC content. Pointing to findings of significant issues with how BBC content was represented in some of the Perplexity AI responses analysed, it said such output fell short of the BBC’s Editorial Guidelines on the provision of impartial and accurate news. “It is therefore highly damaging to the BBC, injuring the BBC’s reputation with audiences – including UK licence fee payers who fund the BBC – and undermining their trust in the BBC,” it added.

Web scraping scrutiny

Chatbots and image generators that can produce content in response to simple text or voice prompts in seconds have swelled in popularity since OpenAI launched ChatGPT in late 2022. But their rapid growth and improving capabilities have prompted questions about their use of existing material without permission.

Much of the material used to develop generative AI models has been pulled from a massive range of web sources using bots and crawlers, which automatically extract site data. The rise in this activity, known as web scraping, recently prompted British media publishers to join calls by creatives for the UK government to uphold protections around copyrighted content.

In response to the BBC’s letter, the Professional Publishers Association (PPA) – which represents over 300 media brands – said it was “deeply concerned that AI platforms are currently failing to uphold UK copyright law.” It said bots were being used to “illegally scrape publishers’ content to train their models without permission or payment.” It added: “This practice directly threatens the UK’s £4.4 billion publishing industry and the 55,000 people it employs.”

Many organisations, including the BBC, use a file called “robots.txt” in their website code to try to block bots and automated tools from extracting data en masse for AI. It instructs bots and web crawlers not to access certain pages and material, where present. But compliance with the directive remains voluntary and, according to some reports, bots do not always respect it.

The BBC said in its letter that while it disallowed two of Perplexity’s crawlers, the company “is clearly not respecting robots.txt”. Mr Srinivas denied accusations that Perplexity’s crawlers ignored robots.txt instructions in an interview with Fast Company last June.
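To illustrate the mechanism: robots.txt is a plain text file of voluntary directives that a well-behaved crawler consults before fetching a page. Below is a minimal sketch of that check in Python, using the standard library’s urllib.robotparser; the “ExampleBot” user-agent and the page URL are illustrative assumptions, not details from the BBC’s letter.

    # Sketch: how a compliant crawler might consult robots.txt before
    # fetching a page. Nothing technically enforces the answer -
    # compliance is voluntary, which is the point made above.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.bbc.co.uk/robots.txt")  # the site's published directives
    rp.read()  # fetch and parse the file

    # "ExampleBot" is a hypothetical user-agent used purely for illustration.
    page = "https://www.bbc.co.uk/news/some-article"
    if rp.can_fetch("ExampleBot", page):
        print("Directives permit fetching this page")
    else:
        print("Directives disallow it - a compliant crawler would skip the page")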
Perplexity also says that because it does not build foundation models, it does not use website content for AI model pre-training.

‘Answer engine’

The company’s AI chatbot has become a popular destination for people looking for answers to common or complex questions. Perplexity describes itself as an “answer engine”, and says on its website that it does this by “searching the web, identifying trusted sources and synthesising information into clear, up-to-date responses”.

It also advises users to double-check responses for accuracy – a common caveat accompanying AI chatbots, which are known to state false information in a matter-of-fact, convincing way.

In January, Apple suspended an AI feature that generated false headlines for BBC News app notifications when summarising groups of them for iPhone users, following BBC complaints.

2 mins read

Judge backs AI firm over use of copyrighted books

A US judge has ruled that using books to train artificial intelligence (AI) software is not a violation of US copyright law.

The decision came out of a lawsuit brought last year against AI firm Anthropic by three authors, including best-selling mystery thriller writer Andrea Bartz, who accused it of stealing their work to train its Claude AI model and build a multi-billion dollar business.

In his ruling, Judge William Alsup said Anthropic’s use of the authors’ books was “exceedingly transformative” and therefore allowed under US law. But he rejected Anthropic’s request to dismiss the case, ruling the firm would have to stand trial over its use of pirated copies to build its library of material.

Bringing the lawsuit alongside Ms Bartz, whose novels include We Were Never Here and The Last Ferry Out, were non-fiction writers Charles Graeber, author of The Good Nurse: A True Story of Medicine, Madness and Murder, and Kirk Wallace Johnson, who wrote The Feather Thief.

Anthropic, a firm backed by Amazon and Google’s parent company, Alphabet, could face up to $150,000 in damages per copyrighted work. The firm holds more than seven million pirated books in a “central library”, according to the judge.

The ruling is among the first to weigh in on a question that is the subject of numerous legal battles across the industry: how Large Language Models (LLMs) can legitimately learn from existing material.

“Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works, not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” Judge Alsup wrote. “If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use,” he said.

He noted that the authors did not claim that the training led to “infringing knockoffs”, with replicas of their works being generated for users of the Claude tool. If they had, he wrote, “this would be a different case”.

Similar legal battles have emerged over the AI industry’s use of other media and content, from journalistic articles to music and video. This month, Disney and Universal filed a lawsuit against AI image generator Midjourney, accusing it of piracy. The BBC is also considering legal action over the unauthorised use of its content.

In response to the legal battles, some AI companies have struck deals with creators of the original materials, or their publishers, to license material for use.

Judge Alsup allowed Anthropic’s “fair use” defence, paving the way for future legal judgements.

3 mins read

AI firm Anthropic agrees to pay authors $1.5bn to settle piracy lawsuit

Artificial intelligence (AI) firm Anthropic has agreed to pay $1.5bn (£1.11bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.

The deal, which requires the approval of US District Judge William Alsup, would be the largest publicly reported copyright recovery in history, according to lawyers for the authors. It comes two months after Judge Alsup found that using books to train AI did not violate US copyright law, but ordered Anthropic to stand trial over its use of pirated material.

Anthropic said on Friday that the settlement would “resolve the plaintiffs’ remaining legacy claims.”

The settlement comes as other big tech companies, including ChatGPT-maker OpenAI, Microsoft and Instagram-parent Meta, face lawsuits over similar alleged copyright violations. Anthropic, maker of the Claude chatbot, has long pitched itself as the ethical alternative among its competitors.

“We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, Deputy General Counsel at Anthropic, which is backed by both Amazon and Google-parent Alphabet.

The lawsuit was filed against Anthropic last year by best-selling mystery thriller writer Andrea Bartz, whose novels include We Were Never Here, along with The Good Nurse author Charles Graeber and The Feather Thief author Kirk Wallace Johnson. They accused the company of stealing their work to train its Claude AI chatbot in order to build a multi-billion dollar business.

The company holds more than seven million pirated books in a central library, according to Judge Alsup’s June decision, and faced up to $150,000 in damages per copyrighted work. His ruling was among the first to weigh in on how Large Language Models (LLMs) can legitimately learn from existing material. It found that Anthropic’s use of the authors’ books was “exceedingly transformative” and therefore allowed under US law. But he rejected Anthropic’s request to dismiss the case, and Anthropic had been set to stand trial in December over its use of pirated copies to build its library of material.

The plaintiffs’ lawyers called the settlement announced on Friday “the first of its kind in the AI era.” “It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” said Justin Nelson, a lawyer representing the authors. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

Questions about the intersection of AI development and copyright law are increasingly landing in the courts. “You need that fresh training data from human beings,” said Alex Yang, Professor of Management Science and Operations at London Business School. “If you want to grant more copyright to AI-created content, you must also strengthen mechanisms that compensate humans for their original contributions.”
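For a sense of scale, the figures reported above support a rough back-of-envelope comparison between Anthropic’s theoretical exposure and the agreed settlement. The sketch below is illustrative arithmetic only; it assumes, purely for simplicity, that the per-work cap and the settlement are spread evenly across all seven million books, which is not how a court or settlement administrator would actually allocate damages.

    # Hedged back-of-envelope arithmetic from the figures in the article.
    books = 7_000_000            # pirated books in the "central library"
    cap_per_work = 150_000       # maximum damages per copyrighted work ($)
    settlement = 1_500_000_000   # the agreed settlement ($1.5bn)

    print(f"Theoretical cap: ${books * cap_per_work:,}")       # $1,050,000,000,000
    print(f"Settlement per book: ${settlement / books:,.2f}")  # ~$214.29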