Category: Science
Children hacking their own schools for ‘fun’, watchdog warns
More than half of school and college cyber attacks and data breaches are being carried out by their own pupils, the Information Commissioner’s Office (ICO) has revealed.

School children and college students are carrying out hacks and accessing private data for fun or as part of dares, the ICO says, calling it a “worrying trend”. It is warning teachers that they are failing to understand and recognise what it calls the “insider threat” pupils pose.

“What starts out as a dare, a challenge, a bit of fun in a school setting can ultimately lead to children taking part in damaging attacks on organisations or critical infrastructure,” said Heather Toomey, Principal Cyber Specialist at the ICO.

It comes amid a spate of high-profile cyber attacks, affecting firms including M&S and Jaguar Land Rover, in which teenage hackers have been implicated.

Since 2022, the ICO has investigated 215 hacks and breaches in education settings and says 57% were carried out by children. According to the new data, almost a third of the breaches involved students illegally logging into staff computer systems by guessing passwords or stealing details from teachers.

In one incident, a seven-year-old was involved in a data breach and was subsequently referred to the National Crime Agency’s Cyber Choices programme to help them understand the seriousness of their actions. The ICO did not give details on the nature of that breach.

In another incident, three Year 11 students aged 15 or 16 unlawfully accessed school databases containing the personal information of more than 1,400 students. The pupils used hacking tools downloaded from the internet to break passwords and security protocols. When questioned, they said they were interested in cyber security and wanted to test their skills and knowledge.

Another example the ICO gave is of a student illegally logging into their college’s databases with a teacher’s details to change or delete personal information belonging to more than 9,000 staff, students and applicants. The system stored personal information such as names and home addresses, school records, health data, safeguarding and pastoral logs, and emergency contacts.

Schools are facing an increasing number of cyber attacks, with 44% of schools reporting an attack or breach in the last year, according to the government’s most recent Cyber Security Breaches Survey. Youth cyber crime culture is a growing threat, with many recent attacks linked to English-speaking teen gangs. Young or teenage alleged hackers have been arrested in the UK and the US in the last year for hacking campaigns against major companies including MGM Grand Casinos, TfL, Marks and Spencer and Co-op.
BBC threatens AI firm with legal action over unauthorised content use
The BBC is threatening to take legal action against an artificial intelligence (AI) firm whose chatbot the corporation says is reproducing BBC content “verbatim” without its permission.

The BBC has written to Perplexity, which is based in the US, demanding it immediately stops using BBC content, deletes any it holds, and proposes financial compensation for the material it has already used. It is the first time that the BBC – one of the world’s largest news organisations – has taken such action against an AI company.

In a statement, Perplexity said: “The BBC’s claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google’s illegal monopoly.” It did not explain what it believed the relevance of Google was to the BBC’s position, or offer any further comment.

The BBC’s legal threat has been made in a letter to Perplexity’s boss Aravind Srinivas. “This constitutes copyright infringement in the UK and breach of the BBC’s terms of use,” the letter says.

The BBC also cited its research published earlier this year that found four popular AI chatbots – including Perplexity AI – were inaccurately summarising news stories, including some BBC content. Pointing to findings of significant issues with representation of BBC content in some Perplexity AI responses analysed, it said such output fell short of BBC Editorial Guidelines around the provision of impartial and accurate news. “It is therefore highly damaging to the BBC, injuring the BBC’s reputation with audiences – including UK licence fee payers who fund the BBC – and undermining their trust in the BBC,” it added.

Web scraping scrutiny

Chatbots and image generators that can generate content in response to simple text or voice prompts in seconds have swelled in popularity since OpenAI launched ChatGPT in late 2022. But their rapid growth and improving capabilities have prompted questions about their use of existing material without permission.

Much of the material used to develop generative AI models has been pulled from a massive range of web sources using bots and crawlers, which automatically extract site data. The rise in this activity, known as web scraping, recently prompted British media publishers to join calls by creatives for the UK government to uphold protections around copyrighted content.

In response to the BBC’s letter, the Professional Publishers Association (PPA) – which represents over 300 media brands – said it was “deeply concerned that AI platforms are currently failing to uphold UK copyright law.” It said bots were being used to “illegally scrape publishers’ content to train their models without permission or payment.” It added: “This practice directly threatens the UK’s £4.4 billion publishing industry and the 55,000 people it employs.”

Many organisations, including the BBC, use a file called “robots.txt” on their websites to try to block bots and automated tools from extracting data en masse for AI. Where present, it instructs bots and web crawlers not to access certain pages and material. But compliance with the directive remains voluntary and, according to some reports, bots do not always respect it.

The BBC said in its letter that while it disallowed two of Perplexity’s crawlers, the company “is clearly not respecting robots.txt”. In an interview with Fast Company last June, Mr Srinivas denied accusations that Perplexity’s crawlers ignored robots.txt instructions.
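The mechanism at issue is simple: robots.txt is a plain-text file of rules such as “User-agent: PerplexityBot” followed by “Disallow: /”, and honouring those rules is left entirely to the crawler. As a minimal illustrative sketch (the “ExampleBot” user agent is a placeholder, not a real crawler), this is how a compliant crawler could consult a site’s robots.txt with Python’s standard library before fetching anything:

```python
# Sketch of a well-behaved crawler checking robots.txt before a fetch.
# "ExampleBot" is a hypothetical user agent used for illustration.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.bbc.co.uk/robots.txt")
rp.read()  # download and parse the site's robots.txt rules

page = "https://www.bbc.co.uk/news"
if rp.can_fetch("ExampleBot", page):
    print("robots.txt permits fetching", page)
else:
    print("robots.txt disallows fetching", page)
# Nothing technically stops a crawler that skips this check entirely,
# which is why compliance with robots.txt remains voluntary.
```

A crawler that simply omits the can_fetch check faces no technical barrier, which is why publishers argue that compliance cannot be assumed.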
Perplexity also says that because it does not build foundation models, it does not use website content for AI model pre-training.

‘Answer engine’

The company’s AI chatbot, which Perplexity describes as an “answer engine”, has become a popular destination for people looking for answers to common or complex questions. It says on its website that it does this by “searching the web, identifying trusted sources and synthesising information into clear, up-to-date responses”.

It also advises users to double-check responses for accuracy – a common caveat accompanying AI chatbots, which have been known to state false information in a matter-of-fact, convincing way.

In January, Apple suspended an AI feature that had generated false headlines when summarising groups of BBC News app notifications for iPhone users, following complaints from the BBC.
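For illustration only, the “answer engine” pattern the company describes – search the web, keep trusted sources, synthesise a cited answer – can be sketched as below. Every name here is a hypothetical stand-in, not Perplexity’s actual pipeline or API:

```python
# Minimal sketch of an "answer engine" flow: search, filter to trusted
# sources, then synthesise an answer with citations. All helpers are
# hypothetical placeholders, not any real company's implementation.

TRUSTED_DOMAINS = {"bbc.co.uk", "reuters.com"}  # illustrative allow-list

def search_web(query: str) -> list[tuple[str, str]]:
    # Stand-in for a real search backend; returns (domain, snippet) pairs.
    return [
        ("bbc.co.uk", "Example article text relevant to the query..."),
        ("random-blog.example", "Unverified claim from an unknown site..."),
    ]

def answer(query: str) -> tuple[str, list[str]]:
    results = search_web(query)
    # "Identifying trusted sources": keep only allow-listed domains.
    sources = [(d, t) for d, t in results if d in TRUSTED_DOMAINS]
    # "Synthesising information": a real system would summarise with an
    # LLM; here we simply join the snippets and attach citations.
    summary = " ".join(t for _, t in sources)
    citations = [d for d, _ in sources]
    return summary, citations

print(answer("example query"))
```

The relevant point for the dispute is that such a system retrieves and reproduces publisher content at query time, separately from any model pre-training.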
AI firm Anthropic agrees to pay authors $1.5bn to settle piracy lawsuit
Artificial intelligence (AI) firm Anthropic has agreed to pay $1.5bn (£1.11bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.

The deal, which requires the approval of US District Judge William Alsup, would be the largest publicly reported copyright recovery in history, according to lawyers for the authors. It comes two months after Judge Alsup found that using books to train AI did not violate US copyright law, but ordered Anthropic to stand trial over its use of pirated material.

Anthropic said on Friday that the settlement would “resolve the plaintiffs’ remaining legacy claims.” The settlement comes as other big tech companies, including ChatGPT-maker OpenAI, Microsoft and Instagram-parent Meta, face lawsuits over similar alleged copyright violations.

Anthropic, with its Claude chatbot, has long pitched itself as the ethical alternative to its competitors. “We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, deputy general counsel at Anthropic, which is backed by both Amazon and Google-parent Alphabet.

The lawsuit was filed against Anthropic last year by best-selling mystery thriller writer Andrea Bartz, whose novels include We Were Never Here, along with The Good Nurse author Charles Graeber and The Feather Thief author Kirk Wallace Johnson. They accused the company of stealing their work to train its Claude AI chatbot in order to build a multi-billion-dollar business.

The company holds more than seven million pirated books in a central library, according to Judge Alsup’s June decision, and faced up to $150,000 in damages per copyrighted work. His ruling was among the first to weigh in on how large language models (LLMs) can legitimately learn from existing material. It found that Anthropic’s use of the authors’ books was “exceedingly transformative” and therefore allowed under US law. But he rejected Anthropic’s request to dismiss the case, and the company was set to stand trial in December over its use of pirated copies to build its library of material.

The plaintiffs’ lawyers called the settlement announced on Friday “the first of its kind in the AI era.” “It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” said Justin Nelson, a lawyer representing the authors. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

Questions about the intersection of AI development and copyright law are increasingly landing in the courts. “You need that fresh training data from human beings,” said Alex Yang, professor of management science and operations at London Business School. “If you want to grant more copyright to AI-created content, you must also strengthen mechanisms that compensate humans for their original contributions.”
In groundbreaking study, researchers publish brain map showing how decisions are made
Neuroscientists from 22 labs joined forces in an unprecedented international partnership to produce a landmark achievement: a neural map that shows activity across the entire brain during decision-making.

The data, gathered from 139 mice, encompass activity from more than 600,000 neurons in 279 areas of the brain — about 95% of the brain in a mouse. This map is the first to provide a complete picture of what happens across the brain as a decision is made.

“They have created the largest dataset anyone has ever imagined at this scale,” Dr. Paul W. Glimcher, chair of the department of neuroscience and physiology and director of the Neuroscience Institute at New York University’s Grossman School of Medicine, said of the researchers. In the field of neuroscience, “this is going to go down in history as a major event,” Glimcher, who was not involved in the new research, told CNN.

To construct the map, researchers first created a standardized procedure to be shared across laboratories, then tracked neural activity in mice as the rodents responded to visual prompts, integrating all the data gathered by each lab. Seven years in the making and presented in two studies, the findings were published on September 3 in the journal Nature.

“There are basically two big results, which is why we have two papers,” said Alexandre Pouget, a full professor in basic neuroscience at the University of Geneva. One study outlined the widespread distribution of electrical activity related to decision-making. The other used the data to evaluate how expectations shape choices. Pouget is a coauthor of the first study and senior author of the second.

“We started from scratch,” he told CNN. “Nobody had ever attempted to do something like this before.”
