https://www.wired.com/story/amazon-ai-cameras-emotions-uk-train-passengers/

WIRED | Security | Jun 17, 2024 3:00 AM
By Matt Burgess

Amazon-Powered AI Cameras Used to Detect Emotions of Unwitting UK Train Passengers

CCTV cameras and AI are being combined to monitor crowds, detect bike thefts, and spot trespassers.

[Photo: travelers on the concourse of Waterloo Station in London, UK. Photograph: Peter Dazeley/Getty Images]

Thousands of people catching trains in the United Kingdom likely had their faces scanned by Amazon software as part of widespread artificial intelligence trials, new documents reveal. The image recognition system was used to predict travelers' age, gender, and potential emotions, with the suggestion that the data could be used in advertising systems in the future.

During the past two years, eight train stations around the UK, including large hubs such as London's Euston and Waterloo and Manchester Piccadilly as well as smaller stations, have tested AI surveillance technology with CCTV cameras, with the aim of alerting staff to safety incidents and potentially reducing certain types of crime.

The extensive trials, overseen by rail infrastructure body Network Rail, have used object recognition, a type of machine learning that can identify items in video feeds, to detect people trespassing on tracks, monitor and predict platform overcrowding, identify antisocial behavior ("running, shouting, skateboarding, smoking"), and spot potential bike thieves. Separate trials have used wireless sensors to detect slippery floors, full bins, and drains that may overflow.

The scope of the AI trials, elements of which have previously been reported, was revealed in a cache of documents obtained through a freedom of information request by civil liberties group Big Brother Watch. "The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step," says Jake Hurfurt, the group's head of research and investigations.

The trials used a combination of "smart" CCTV cameras, which can detect objects or movements in the images they capture, and older cameras whose video feeds are connected to cloud-based analysis. Between five and seven cameras or sensors were included at each station, note the documents, which are dated from April 2023. One spreadsheet lists 50 possible AI use cases, although not all of these appear to have been used in the tests. One station, London Euston, was due to trial a "suicide risk" detection system, but the documents say the camera failed, and staff saw no need to replace it because Euston is a "terminus" station.

Hurfurt says the most "concerning" element of the trials focused on "passenger demographics." According to the documents, this setup could use images from the cameras to produce a "statistical analysis of age range and male/female demographics" and could also "analyze for emotion," such as "happy, sad, and angry."

The images were captured when people crossed a "virtual tripwire" near ticket barriers and were sent for analysis by Amazon's Rekognition system, which allows face and object analysis. The documents say this could allow passenger "satisfaction" to be measured, noting that "this data could be utilized to maximum advertising and retail revenue."
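The documents describe the pipeline only at this level of detail, but the capabilities listed match Rekognition's publicly documented DetectFaces API, which returns per-face estimates of age range, gender, and emotion. Below is a minimal sketch of such a setup, assuming a Python client via boto3; the tripwire check, function names, and region are illustrative assumptions, while detect_faces and its AgeRange, Gender, and Emotions response fields are part of the real API.

```python
# Hypothetical sketch: trigger analysis when a tracked person crosses a
# "virtual tripwire," then send the frame to Amazon Rekognition. Only the
# boto3 call and response fields are the real, documented API; the rest
# is illustrative.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-2")  # assumed region

def crossed_tripwire(prev_y: float, curr_y: float, line_y: float) -> bool:
    """True when a tracked point passes the virtual line between two frames."""
    return (prev_y - line_y) * (curr_y - line_y) < 0

def analyze_passengers(jpeg_bytes: bytes) -> list:
    """Send one camera frame to Rekognition and summarize demographics."""
    resp = rekognition.detect_faces(
        Image={"Bytes": jpeg_bytes},
        Attributes=["ALL"],  # includes AgeRange, Gender, and Emotions
    )
    summary = []
    for face in resp["FaceDetails"]:
        top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
        summary.append({
            "age_range": (face["AgeRange"]["Low"], face["AgeRange"]["High"]),
            "gender": face["Gender"]["Value"],
            "emotion": top_emotion["Type"],  # e.g. "HAPPY", "SAD", "ANGRY"
        })
    return summary
```

Notably, Amazon's own documentation cautions that these emotion values reflect the apparent expression on a face, not a determination of a person's internal emotional state, a distinction the researchers quoted below also draw.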
AI researchers have frequently warned that using the technology to detect emotions is "unreliable," and some say it should be banned outright, given the difficulty of inferring how someone feels from audio or video. In October 2022, the UK's data regulator, the Information Commissioner's Office, issued a public statement warning against the use of emotion analysis, saying the technologies are "immature" and "they may not work yet, or indeed ever."

Network Rail did not answer questions about the trials sent by WIRED, including questions about the current status of AI usage, emotion detection, and privacy concerns. "We take the security of the rail network extremely seriously and use a range of advanced technologies across our stations to protect passengers, our colleagues, and the railway infrastructure from crime and other threats," a Network Rail spokesperson says. "When we deploy technology, we work with the police and security services to ensure that we're taking proportionate action, and we always comply with the relevant legislation regarding the use of surveillance technologies."

It is unclear how widely the emotion detection analysis was deployed: the documents at times say the use case should be "viewed with more caution," and reports from stations say it was "impossible to validate accuracy." However, Gregory Butler, the CEO of data analytics and computer vision company Purple Transform, which has been working with Network Rail on the trials, says the capability was discontinued during the tests and that no images were stored while it was active.

The Network Rail documents describe multiple use cases in which the cameras could send automated alerts to staff when they detect certain behavior. None of the systems use controversial face recognition technology, which aims to match people's identities to those stored in databases.

"A primary benefit is the swifter detection of trespass incidents," says Butler, who adds that his firm's analytics system, SiYtE, is in use at 18 sites, including train stations and alongside tracks. In the past month, Butler says, the systems have detected five serious cases of trespassing at two sites, including a teenager collecting a ball from the tracks and a man "spending over five minutes picking up golf balls along a high-speed line."

At Leeds train station, one of the busiest outside of London, 350 CCTV cameras are connected to the SiYtE platform, Butler says. "The analytics are being used to measure people flow and identify issues such as platform crowding and, of course, trespass--where the technology can filter out track workers through their PPE uniform," he says. "AI helps human operators, who cannot monitor all cameras continuously, to assess and address safety risks and issues promptly."
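Butler does not describe how the PPE filtering is implemented. One plausible shape for the rule, sketched below under the assumption that an upstream object detector emits labeled bounding boxes, is to raise an alert only when a detected person overlaps the track zone and no hi-vis detection overlaps that person; the class names, zone, and overlap thresholds here are hypothetical, not SiYtE's actual interface.

```python
# Hypothetical PPE-aware trespass filter: flag people in the track zone
# unless a hi-vis vest detection overlaps them (i.e., likely a track worker).
from dataclasses import dataclass

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float
    label: str  # e.g. "person" or "hi_vis_vest" (assumed class names)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a.x2, b.x2) - max(a.x1, b.x1))
    iy = max(0.0, min(a.y2, b.y2) - max(a.y1, b.y1))
    inter = ix * iy
    union = ((a.x2 - a.x1) * (a.y2 - a.y1)
             + (b.x2 - b.x1) * (b.y2 - b.y1) - inter)
    return inter / union if union else 0.0

def trespass_alerts(detections: list[Box], track_zone: Box) -> list[Box]:
    """Return person boxes in the track zone that carry no overlapping PPE."""
    people = [d for d in detections if d.label == "person"]
    vests = [d for d in detections if d.label == "hi_vis_vest"]
    alerts = []
    for person in people:
        in_zone = iou(person, track_zone) > 0.1      # assumed threshold
        has_ppe = any(iou(person, v) > 0.3 for v in vests)  # assumed threshold
        if in_zone and not has_ppe:
            alerts.append(person)  # likely trespasser, not a track worker
    return alerts
```

A real deployment would also need tracking across frames to suppress duplicate alerts for the same person, which this single-frame sketch omits.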
The Network Rail documents claim that cameras used at one station, Reading, allowed police to speed up investigations into bike thefts by pinpointing bikes in the footage. "It was established that, whilst analytics could not confidently detect a theft, they could detect a person with a bike," the files say.

They also add that new air quality sensors used in the trials could save staff the time spent manually conducting checks. One AI instance uses data from sensors to detect "sweating" floors, which have become slippery with condensation, and alerts staff when they need to be cleaned.
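The documents do not explain how "sweating" is inferred. A standard approach, sketched below as an assumption rather than a description of the deployed system, is to compare the floor's surface temperature against the dew point of the ambient air, computed from temperature and relative humidity with the Magnus approximation; condensation forms when the surface falls to or below the dew point.

```python
# Illustrative "sweating floor" rule: alert when the floor surface
# temperature approaches the dew point of the surrounding air.
# Sensor names, inputs, and the safety margin are assumptions; the
# Magnus dew-point approximation itself is standard.
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Magnus-formula dew point (deg C), valid roughly from -45 to 60 C."""
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

def floor_sweating(air_c: float, rh_pct: float, floor_c: float,
                   margin_c: float = 1.0) -> bool:
    """Alert when the floor is within margin_c of the air's dew point."""
    return floor_c <= dew_point_c(air_c, rh_pct) + margin_c

# Example: air at 18 C and 85% relative humidity has a dew point near
# 15.4 C, so a 15 C floor would trigger a cleaning/caution alert.
```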
While the documents detail some elements of the trials, privacy experts say they are concerned about the overall lack of transparency and debate about the use of AI in public spaces. Hurfurt says that one document, designed to assess data protection issues with the systems, shows a "dismissive attitude" toward people who may have privacy concerns. One question asks: "Are some people likely to object or find it intrusive?" A staff member writes: "Typically, no, but there is no accounting for some people."

At the same time, similar AI surveillance systems that monitor crowds are increasingly being used around the world. During the Paris Olympic Games later this year, AI video surveillance will watch thousands of people and try to pick out crowd surges, use of weapons, and abandoned objects.

"Systems that do not identify people are better than those that do, but I do worry about a slippery slope," says Carissa Véliz, an associate professor in philosophy at the Institute for Ethics in AI at the University of Oxford. Véliz points to similar AI trials on the London Underground that initially blurred the faces of people who might have been dodging fares, but then changed approach, unblurring the photos and keeping images for longer than initially planned.

"There is a very instinctive drive to expand surveillance," Véliz says. "Human beings like seeing more, seeing further. But surveillance leads to control, and control to a loss of freedom that threatens liberal democracies."