AI Surveillance on UK Trains: Unwitting Passengers Scanned for Emotions and Demographics
Matthew Burgess
Newly obtained documents show that large numbers of UK rail passengers likely had their faces scanned by Amazon software during widespread AI trials, without their knowledge. The technology was used to estimate passengers' age, gender, and possible emotional state, and the documents suggest the data could be used in future advertising systems.
Over the last 24 months, a range of train stations across the United Kingdom, from major hubs like Euston and Waterloo in London and Manchester Piccadilly to smaller locales, have trialed artificial intelligence-powered surveillance systems through their CCTV networks. The purpose of this initiative is to notify employees about safety-related events and possibly decrease specific crime rates.
Comprehensive testing, managed by the railway infrastructure authority Network Rail, has employed object recognition technology—a form of machine learning capable of recognizing objects in video streams—to spot individuals unlawfully accessing the railway lines, observe and forecast when platforms may become excessively crowded, detect disorderly conduct (such as running, yelling, skateboarding, or smoking), and identify individuals likely to steal bicycles. Additionally, independent experiments have utilized wireless sensors to notice when floors become hazardous due to slipperiness, trash receptacles are filled to capacity, and drainage systems are at risk of overflowing.
The extent of the AI trials, parts of which had previously been reported, was revealed in a cache of documents obtained through a Freedom of Information Act request by the civil liberties group Big Brother Watch. "The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step," says Jake Hurfurt, head of research and investigations at the organization.
The experiments with artificial intelligence leveraged a mix of advanced CCTV cameras capable of recognizing objects or movements in their footage, along with older cameras linked to cloud-based processing for analysis. Each location was equipped with approximately five to seven cameras or sensors, according to records dated April 2023. A particular spreadsheet details 50 potential applications for AI, though it seems not all were implemented in the pilot tests. At London Euston station, there was an intention to test a system designed to identify potential suicide risks. However, the records indicate that this system encountered a malfunction, and the decision was made not to replace it, partly because the station serves as a terminus.
Hurfurt says the most alarming element of the trials is their focus on "passenger demographics." The documents say the system could use images from the cameras to produce a statistical breakdown of age ranges and gender, and that it can analyze for emotions such as happiness, sadness, and anger.
Images were captured as people crossed a virtual line near ticket barriers and sent to Amazon's Rekognition system, which allows face and object analysis. According to the documents, this could be used to measure passenger satisfaction, and the data could be utilized to maximize advertising and retail revenue.
AI researchers have frequently warned that using the technology to detect emotions is unreliable, and some say it should be banned, given the difficulty of working out how someone is feeling from audio or video. In October 2022, the UK's Information Commissioner's Office issued a public statement warning against the use of emotion-analysis technologies, describing them as immature and questioning whether they will ever work as claimed.
Network Rail did not answer questions about the trials sent by WIRED, including questions about the current use of AI, emotion detection, and privacy concerns.
A spokesperson for Network Rail said the organization takes the security of the rail network extremely seriously and uses a range of advanced technologies at stations to protect passengers, staff, and the railway infrastructure from crime and other threats. The spokesperson added that when it deploys such technology, it works with the police and security services to ensure its actions are proportionate and comply with the relevant legislation on the use of surveillance technologies.
It is unclear how widely the emotion-detection analysis was deployed: the documents at times say the use case should be viewed with more caution, and reports from stations note that it was not possible to validate the technology's accuracy. Nevertheless, Gregory Butler, CEO of Purple Transform, the data-analytics and computer-vision company working with Network Rail on the trials, says the capability was discontinued during the tests and that no images were stored while it was active.
Documents from Network Rail on the AI experiments outline various scenarios where cameras could automatically notify employees upon recognizing specific actions. None of the setups employ the contentious facial recognition technology designed to identify individuals by comparing their features with those in existing databases.
Butler says one key benefit has been faster detection of trespass incidents. His company's monitoring system, SiYtE, is in use at 18 sites, including railway stations and stretches of track. Recently, Butler says, five serious trespass cases were detected at two of those sites, including a teenager retrieving a ball from the tracks and a man who spent an extended period collecting golf balls along a high-speed line.
At Leeds station, one of the busiest outside London, 350 CCTV cameras are connected to the SiYtE platform, Butler says. The analytics are used to monitor footfall and flag issues such as platform crowding and trespass, and the system can distinguish track workers from unauthorized individuals by recognizing the workers' safety gear. AI, Butler says, helps staff who cannot continuously monitor every camera to identify and respond to safety risks quickly.
According to documents from Network Rail, surveillance technology implemented at Reading station has enhanced police efficiency in resolving bicycle theft cases by precisely identifying bicycles in video recordings. The documentation indicates that although the analytics were not reliably able to recognize theft activities, they were successful in spotting individuals accompanied by bikes. Additionally, the introduction of air quality monitoring devices during the trials has reportedly reduced the need for manual inspections by staff. In one application of artificial intelligence, sensor data is utilized to identify floors that have become hazardous due to moisture accumulation, prompting notifications for necessary cleaning.
The documents describe some elements of the trials, but privacy experts are alarmed by the overall lack of transparency and debate about the use of AI in public spaces. Hurfurt of Big Brother Watch points to one document assessing the systems' data-protection implications, which he says takes a dismissive attitude toward people with privacy concerns. It asks, “Will there be individuals who might oppose or perceive it as invasive?” A staff member responds, “Usually not, but one can never predict everyone's reaction.”
Concurrently, comparable AI monitoring technologies designed to oversee large groups of people are becoming more prevalent globally. At the upcoming Paris Olympic Games in France, AI-enhanced video surveillance systems will be deployed to observe multitudes, aiming to identify instances of sudden crowd movements, weaponry usage, and left-behind items.
Carissa Véliz, an associate professor in psychology at the Institute for Ethics in AI at the University of Oxford, says she prefers systems that do not identify individuals, but warns that such deployments can drift toward more intrusive uses. She points to similar AI trials on the London Underground, where the technology initially blurred the faces of people who might have been dodging fares, but the approach later changed: images were unblurred and retained for longer than originally planned.
Véliz points out that there's a natural inclination toward increasing surveillance. She notes that people have a desire to observe more, to extend their vision. However, she warns that this tendency towards surveillance can result in greater control, which in turn can erode freedoms, posing a risk to the foundations of liberal democracies.
© 2024 Condé Nast. All rights reserved. WIRED may earn a portion of sales from products purchased through this site as part of affiliate partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached, or otherwise used without the prior written permission of Condé Nast.