Matthew Burgess
AI Cameras Powered by Amazon Scanned Faces of Train Passengers in the UK Without Their Knowledge
Recently disclosed documents reveal that rail passengers in the United Kingdom unknowingly had their faces scanned by Amazon software as part of wide-ranging trials of artificial intelligence. The technology was used to predict travelers' age, gender, and potential emotions, and the documents suggest the data could one day be used in advertising systems.
Over the past two years, eight train stations in the United Kingdom, including major hubs such as London's Euston and Waterloo and Manchester Piccadilly, as well as several smaller stations, have tested AI surveillance technology built on their CCTV cameras. The trials are intended to alert staff to safety incidents and potentially reduce certain types of crime.
In a series of extensive trials overseen by Network Rail, the body responsible for the country's rail infrastructure, object recognition, a type of machine learning that can identify items in video feeds, has been used to detect people trespassing on the tracks, monitor and predict platform overcrowding, identify antisocial behavior such as running, shouting, skateboarding, and smoking, and spot potential bike thieves. Separate trials have used wireless sensors to detect slippery floors, full waste bins, and drains at risk of overflowing.
The scale of the AI trials, elements of which had previously been reported, was revealed in a cache of documents obtained through a Freedom of Information Act request by the civil liberties group Big Brother Watch. Jake Hurfurt, the group's head of research and investigations, says the rollout and normalization of AI surveillance in public spaces, without much consultation or conversation, is deeply concerning.
The AI trials used a combination of "smart" CCTV cameras that can detect objects or movements in their own footage and older cameras whose video feeds are uploaded to cloud-based tools for analysis. Documents dated April 2023 show that each station had five to seven cameras or sensors. A spreadsheet in the documents lists 50 possible AI use cases, though not all appear to have been deployed in the trials. At London Euston, a system intended to detect people at risk of suicide was planned, but the camera failed and staff saw no urgency in replacing it because Euston is a "terminus" station.
Hurfurt points out that the aspect of the trials that raises the most alarm revolves around “passenger demographics.” The documents reveal that this system might employ camera footage to conduct a “statistical analysis of age range and male/female demographics,” and it possesses the capability to “assess emotions,” including “happy, sad, and angry.”
Images were captured as passengers crossed a virtual line near the ticket gates and sent to Amazon's Rekognition system, which can analyze faces and objects. According to the documents, this could allow passenger satisfaction to be measured, and the documents note that "this information could be leveraged to optimize advertising and retail profits."
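The documents do not include Network Rail's integration code, but the categories described (age range, gender, and emotions such as happy, sad, and angry) match the per-face fields Amazon Rekognition's DetectFaces API returns when called with `Attributes=["ALL"]`. Purely as an illustration, here is a minimal sketch of reducing one Rekognition-style face record to that demographic summary; the `summarize_face` helper and the sample data are hypothetical, though the response shape follows AWS's documented format:

```python
# Illustrative only: Network Rail's actual pipeline is not public.
# A real call needs AWS credentials and would look something like:
#   import boto3
#   client = boto3.client("rekognition")
#   resp = client.detect_faces(Image={"Bytes": jpeg_bytes}, Attributes=["ALL"])
#   faces = resp["FaceDetails"]

def summarize_face(face_detail: dict) -> dict:
    """Reduce one Rekognition FaceDetails entry to the kind of summary
    the trial documents describe: age range, gender, and the single
    highest-confidence emotion."""
    emotions = face_detail.get("Emotions", [])
    top = max(emotions, key=lambda e: e["Confidence"]) if emotions else None
    return {
        "age_range": (face_detail["AgeRange"]["Low"], face_detail["AgeRange"]["High"]),
        "gender": face_detail["Gender"]["Value"],
        "dominant_emotion": top["Type"] if top else None,
    }

# Sample fragment shaped like Rekognition's documented FaceDetails output
sample = {
    "AgeRange": {"Low": 25, "High": 35},
    "Gender": {"Value": "Male", "Confidence": 99.1},
    "Emotions": [
        {"Type": "HAPPY", "Confidence": 87.5},
        {"Type": "CALM", "Confidence": 10.2},
        {"Type": "ANGRY", "Confidence": 1.1},
    ],
}
print(summarize_face(sample))
# → {'age_range': (25, 35), 'gender': 'Male', 'dominant_emotion': 'HAPPY'}
```

Note that the API only reports the apparent, inferred emotion on a face, not what the person actually feels, which is precisely the limitation the experts quoted below emphasize.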
Experts in artificial intelligence often caution that employing AI for emotion detection is "untrustworthy," with some advocates suggesting a prohibition because discerning an individual's feelings through audio or visual cues is challenging. In October 2022, the Information Commissioner’s Office in the UK publicly advised against the deployment of emotion analysis tools, labeling these technologies as "underdeveloped" and expressing skepticism about their current or future effectiveness.
Network Rail did not answer WIRED's questions about the trials, including questions about the use of AI, emotion detection, and privacy.
"The safeguarding of the railway system is a top priority for us, and we implement various sophisticated technologies at our stations to safeguard travelers, our staff, and the railway facilities against criminal activities and potential hazards," states a representative from Network Rail. "In the implementation of these technologies, we collaborate with law enforcement and security agencies to guarantee our measures are appropriate, and we consistently adhere to the necessary laws pertaining to surveillance technology usage."
The extent of the deployment of emotion detection analysis remains ambiguous. Documents occasionally suggest that the application of this technology should be approached with greater skepticism, and feedback from stations indicates that verifying its accuracy is unfeasible. Nonetheless, Gregory Butler, the chief executive of Purple Transform, a data analytics and computer vision firm collaborating with Network Rail on the pilot projects, mentioned that this feature was halted during the experimentation phase and assured that no images were retained while it was operational.
Documents from Network Rail detailing the AI experiments outline several scenarios where cameras could automatically notify employees upon recognizing specific behaviors. These systems avoid the use of contentious facial recognition technology, which seeks to identify individuals by comparing their faces to those in existing databases.
Butler says a key benefit of the system is faster detection of trespassing. He says the analytics platform, SiYtE, is in use at 18 sites, including train stations and stretches of track. In the past month alone, he says, it flagged five serious trespassing incidents at two sites, among them a teenager retrieving a ball from the tracks and a person who spent more than five minutes collecting golf balls along a high-speed line.
At Leeds station, one of the busiest outside London, 350 CCTV cameras are connected to the SiYtE platform, Butler says. "The analytics are used to measure passenger flow and identify issues such as platform crowding and trespassing, and the system can distinguish trespassers from track workers by recognizing their safety gear," he explains. "AI helps the staff, who cannot continuously monitor every camera, to assess and address safety risks quickly."
According to records from Network Rail, surveillance cameras at Reading station have enhanced the efficiency of police inquiries into bicycle thefts by providing precise locations of bicycles in video recordings. The documents indicate that although the technology may not reliably identify a theft in progress, it is capable of recognizing an individual in possession of a bicycle. Additionally, the introduction of air quality monitoring devices in experimental phases has been shown to reduce the amount of time staff spend on manual assessments. A particular application of artificial intelligence utilizes data from these sensors to identify areas of the floor that have become hazardous due to moisture accumulation, subsequently notifying personnel when these areas require cleaning.
The papers shed light on certain aspects of the experiments, yet privacy advocates express worries over the general opacity and the missing discussions surrounding the deployment of AI in communal areas. Hurfurt of Big Brother Watch observes what seems to be an "indifferent stance" towards individuals who might harbor concerns about privacy in one paper aimed at evaluating the data protection implications of these systems. It includes a query: "Might some individuals oppose or perceive it as invasive?" To which an employee responds: "Usually not, though there's no predicting the reactions of a few."
Concurrently, comparable AI monitoring technologies designed for crowd surveillance are being adopted globally. At this year's Paris Olympic Games in France, AI-powered video surveillance will oversee thousands of spectators, aiming to identify surges in crowds, detect weapons, and spot left-behind items.
Carissa Véliz, an associate professor in philosophy at the University of Oxford's Institute for Ethics in AI, says she prefers systems that anonymize individuals to those that do not, but warns of how such tools can drift. She cites AI trials on the London Underground that initially blurred the faces of people suspected of dodging fares but later changed approach, unblurring the images and keeping them for longer than originally planned.
Véliz points out a natural tendency to increase monitoring, stating, "People have an inherent desire to observe more, to look beyond. However, this expansion of surveillance tends to result in greater control, which in turn diminishes freedom, posing a risk to the foundations of liberal democracies."
© 2024 Condé Nast. All rights reserved.