Privacy in the Age of AI: Unraveling the Controversies Surrounding Musk’s Grok AI
Essential Insights on Grok AI and Privacy Considerations
Elon Musk and Sam Altman co-founded OpenAI in 2015 with a stated ethical mission: to develop artificial intelligence for the benefit of humanity, free from the influence of big corporate backers.
Nearly a decade later, the relationship between Musk and Altman has spectacularly broken down. Amid ongoing lawsuits against his former business partner, Musk's newer venture, xAI, has launched a formidable rival of its own: Grok AI.
Billed as an AI-powered search assistant with a sense of humor and a rebellious streak, Grok is deliberately built with fewer guardrails than its leading competitors. It is therefore unsurprising that Grok has repeatedly run into problems with hallucination and bias, and that it has been blamed for spreading misinformation about the 2024 election.
At the same time, its handling of user data is under scrutiny. In July, European regulators rebuked Musk after it emerged that X users were opted in by default to having their posts used to train Grok.
Image-generation capabilities in the Grok-2 large language model have also raised alarm. Shortly after its debut in August, it became clear that users could easily produce provocative and incendiary images of political figures, including Kamala Harris and Donald Trump.
So what are the main privacy concerns with Grok AI, and how can you stop your X data from being used to train it?
Deep Integration
Musk is integrating Grok deeply into X, using it to power personalized news feeds and to compose posts. For now, it is in beta and available only to Premium+ subscribers.
One advantage of the integration is that Grok can discuss current events, since it has access to real-time information from X, says Camden Woollven, group head of AI at GRC International Group, a consultancy specializing in data protection and privacy.
Grok tries to set itself apart from its rivals by being transparent and opposing "woke" culture, says Nathan Marlor, head of data and AI at Version 1, a company that helps businesses adopt technology.
In the name of transparency, the Grok team made the underlying model openly available earlier this year. But in keeping with its stance against political correctness, Grok was also built with far fewer guardrails and much less bias mitigation than comparable efforts from OpenAI and Anthropic, Marlor says. That approach may more faithfully reflect its underlying training data, the internet, he suggests, but it also risks perpetuating biased content.
WIRED reached out to X and xAI several times for comment but has not received a reply.
Thanks in part to those looser guardrails, Grok has been caught spreading false information about the US election. Election officials in Minnesota, New Mexico, Michigan, Washington, and Pennsylvania sent a letter of complaint to Musk after Grok shared inaccurate information about ballot deadlines in their states.
The issue was quickly addressed: When asked election-related questions, the chatbot now directs users to Vote.gov for accurate, up-to-date information on the 2024 US elections, The Verge reported.
Still, xAI puts the onus on users to judge the AI's accuracy. "Grok is still in its initial stages," the company says on its help page, warning that the chatbot may "assert incorrect facts with assurance, inaccurately summarize, or overlook certain details."
"xAI urges you to confirm the accuracy of information on your own," they note. "Refrain from disclosing personal details or any sensitive and private information while interacting with Grok."
Concerns are also mounting over Grok's sweeping data collection, not least because users are opted in to sharing their X data with Grok by default, whether or not they ever use the AI assistant.
xAI's Grok Help Center page states that the company may use your X posts, as well as your interactions, inputs, and outcomes with Grok, to train and fine-tune the model.
Grok's training approach carries significant privacy risks, says Marijus Briedis, chief technology officer at NordVPN. Beyond the AI tool's ability to access and analyze potentially private or sensitive information, he says, its capacity to generate images and content with minimal moderation raises further concerns.
Grok-1 was trained on publicly available data up to the third quarter of 2023 and was not pre-trained on X data, including public posts, says Angus Allan, senior product manager at CreateFuture, a digital consultancy known for its expertise in deploying AI. Grok-2, by contrast, was explicitly trained on all X users' "posts, interactions, inputs, and outcomes," with everyone opted in by default.
The EU's GDPR explicitly requires consent to use personal data, and Allan suggests xAI may have "overlooked this requirement for Grok."
That prompted EU regulators to act quickly, pressing X to suspend training on EU users' data shortly after Grok-2's launch the previous month.
Flouting user privacy laws could draw regulatory attention in other countries, too. While the US has no directly equivalent regime, Allan notes that the Federal Trade Commission has previously fined Twitter for failing to respect users' privacy preferences.
How to Opt Out
To stop your posts from being used to train Grok, you can set your profile to private. You can also adjust your X privacy settings to opt your data out of future model training.
Go to Privacy & Safety, then Data Sharing and Personalization, and select Grok. Under Data Sharing, untick the option that reads, "Permit the use of your posts, interactions, inputs, and outcomes with Grok for the purposes of training and refinement."
Even if you no longer use X, it is worth logging back in to opt out, Allan advises. Otherwise, he warns, X is free to use all of your past content, including images, to train future models.
xAI also says users can delete their entire conversation history in one go. Deleted conversations are removed from its systems within 30 days, unless they must be retained for security or legal reasons.
It's hard to say how Grok will develop from here, but judging by its behavior so far, Musk's AI assistant is worth watching closely. To keep your data safe, be mindful of what you post on X and stay on top of any changes to its privacy policy or terms of service, Briedis says. Engaging with these settings, he adds, gives you more control over how your data is handled and potentially used by systems like Grok.