AI in the Workplace: Navigating the Privacy and Security Minefield
Catherine O'Flaherty
Artificial Intelligence Joins the Workforce: Is It Reliable?
Generative AI tools such as OpenAI's ChatGPT and Microsoft's Copilot are advancing rapidly, and with them concerns about the privacy and security risks they introduce, especially in the workplace.
In May, privacy campaigners labeled Microsoft's forthcoming Recall feature a potential "privacy disaster" for its ability to capture screenshots of your PC at short intervals. The feature has drawn scrutiny from the UK's Information Commissioner's Office, which has asked Microsoft for more detail about the security of the product, due to launch soon in its Copilot+ PCs.
Concerns are also mounting over OpenAI's ChatGPT, whose forthcoming macOS app is reported to be able to take screenshots. Privacy advocates warn the feature could inadvertently capture confidential information.
The US House of Representatives has banned its staff from using Microsoft's Copilot after the Office of Cybersecurity assessed it as a security risk, citing the danger of House data leaking to cloud services the House has not approved.
Research firm Gartner has likewise warned of the risks of using Microsoft 365's Copilot, highlighting the potential for sensitive information to leak both internally and externally. And last month, Google was forced to adjust its AI Overviews search feature after screenshots of its incorrect and bizarre answers were widely shared.
Overexposure Risk
People who use generative AI at work face one major hurdle: the risk of accidentally exposing confidential information. According to Camden Woollven, who leads the AI group at risk management firm GRC International Group, these tools are "massive absorbers": they soak up vast quantities of data from the web to train their language models.
Steve Elcock, CEO and founder of Elementsuite, observes that AI companies have a strong appetite for data to train their models and are designing features that make handing it over attractive. Jeff Watkins, chief product and technology officer at digital consultancy xDesign, warns that this vast data collection means personal information can end up inside someone else's ecosystem, from which it could later be extracted through clever prompting.
At the same time, AI systems are themselves targets for cyberattacks. Woollven explains, "In theory, should a hacker infiltrate the large language model (LLM) driving a firm's AI capabilities, they might extract confidential information, introduce inaccurate or deceptive results, or employ the AI for distributing harmful software."
Consumer-grade AI apps pose obvious risks. But Phil Robinson, principal consultant at security consultancy Prism Infosec, points out that there is growing concern over "proprietary" AI services widely considered suitable for professional use, such as Microsoft Copilot.
"In theory, this method might be exploited to inspect confidential information if the appropriate access restrictions are not firmly in place. Employees might request access to salary structures, details regarding mergers and acquisitions, or papers with login information, all of which could potentially be disclosed or traded."
Another concern is the potential use of AI tools to monitor employees, which could violate their privacy. Microsoft says of its Recall feature that "your snapshots remain personal; they are kept on your own computer" and that "you maintain control, ensuring a level of privacy you can rely on."
Even so, Elcock says, "it probably won't be long before this technology is used for monitoring employees."
Self-Restriction
Generative AI carries risks, but there are steps companies and individual employees can take to improve privacy and security, says Lisa Avvocato, vice president of marketing and community at data firm Sama. Chief among them: don't put sensitive data into prompts on publicly available tools such as ChatGPT or Google's Gemini.
Keep prompts generic so you don't give away too much. "Ask, 'Compose a budget proposal template,' rather than, 'Here is my budget; create a spending proposal for a confidential project,'" she suggests. Use AI for the first draft, then layer in the sensitive information yourself.
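As a concrete illustration of that advice, here is a minimal, hypothetical sketch of the pattern: strip obviously sensitive values out of text before it ever reaches a public model. The regular expressions and the use of OpenAI's Python client are illustrative assumptions, not a substitute for a real data-loss-prevention tool.

```python
import re

from openai import OpenAI  # assumes the official openai package is installed

# Illustrative patterns only -- a real deployment would rely on a proper
# data-loss-prevention tool rather than a handful of regexes.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),       # email addresses
    (re.compile(r"\$\s?\d[\d,]*(?:\.\d+)?"), "[AMOUNT]"),       # dollar figures
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD/ACCOUNT]"),  # long digit runs
]


def scrub(text: str) -> str:
    """Replace obviously sensitive values with placeholders before sending."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Compose a budget proposal template. "
    "Context: contact jane.doe@example.com, total budget $2,500,000."
)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": scrub(prompt)}],
)
print(response.choices[0].message.content)
```

The point is the workflow rather than the specific patterns: the model only ever sees placeholders, and the confidential figures are added back into the draft offline, exactly as Avvocato suggests.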
When using AI for research, Avvocato advises validating what it produces to avoid the kind of issues seen with Google's AI Overviews. "Ask it to provide references and links to its sources. And if you ask it to write code, review the output rather than assuming it's ready to run."
Microsoft itself has said that Copilot needs to be set up correctly and that organizations should apply "least privilege," the principle that users should access only the information they need. This, says Prism Infosec's Robinson, is a crucial point: organizations must lay the groundwork for these systems rather than simply trusting the technology and hoping for the best.
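Microsoft's guidance here concerns permissions inside Microsoft 365 itself, but the underlying idea is easy to sketch. The hypothetical example below filters documents by a user's existing access rights before an assistant ever sees them; the roles, data structures, and function names are invented for illustration and are not part of Copilot.

```python
from dataclasses import dataclass


@dataclass
class Document:
    title: str
    required_role: str  # minimum role needed to read this document
    body: str


# Hypothetical role hierarchy: a higher number means broader access.
ROLE_LEVELS = {"staff": 1, "manager": 2, "executive": 3}


def visible_documents(user_role: str, docs: list[Document]) -> list[Document]:
    """Least privilege: only hand the assistant documents the user can already open."""
    level = ROLE_LEVELS[user_role]
    return [d for d in docs if ROLE_LEVELS[d.required_role] <= level]


docs = [
    Document("Holiday policy", "staff", "..."),
    Document("Salary bands", "executive", "..."),
]

# An assistant answering a "staff" user's query never receives the salary
# document, so it cannot leak what it was never given.
context = visible_documents("staff", docs)
assert [d.title for d in context] == ["Holiday policy"]
```

However the access check is actually implemented, the design principle is the one Robinson stresses: an assistant's answers can only be as well scoped as the permissions behind it.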
It's also worth noting that ChatGPT uses the data you share to train its models unless you turn this off in settings or use the enterprise version.
Compilation of Guarantees
The firms embedding generative AI into their products insist they are doing everything they can to protect security and privacy. Microsoft is keen to outline the security and privacy measures built into its Recall feature, and notes that users can manage it via Settings > Privacy & security > Recall & snapshots.
Google states that the introduction of generative AI in Workspace does not alter its core privacy safeguards, which ensure users have decision-making power and management over their data. Furthermore, it emphasizes that this data is not utilized for advertising purposes.
OpenAI emphasizes its commitment to security and privacy across its offerings, saying its business versions come with additional safeguards. "Our goal is for our AI systems to understand the world, not to gather data on private citizens. We actively implement measures to secure user data and ensure privacy," an OpenAI spokesperson told WIRED.
OpenAI provides tools for controlling how data is used, including self-service options to access, export, and delete personal information, along with the ability to opt out of having content used to improve its models. The company says ChatGPT Team, ChatGPT Enterprise, and its API are not trained on customer data or conversations, and that its models do not learn from usage by default.
Either way, your AI coworker doesn't appear to be going anywhere. According to Woollven, as these tools grow more sophisticated and ubiquitous in the workplace, the risks will only increase. "Multimodal AI like GPT-4o, which can understand and produce images, audio, and video, is already upon us. Companies now have to be vigilant about protecting more than just textual information."
Bearing this in mind, individuals and companies should adopt the attitude of handling AI as they would any external service, suggests Woollven. “Avoid disclosing anything you wouldn’t be comfortable having openly shared.”