Open Source AI Sparks Dialogue Among Founders, FTC, and Policy Makers at Y Combinator Event
Open Source AI Generates Excitement Among Entrepreneurs and Regulatory Attention
Y Combinator is best known for its Demo Days, where young companies pitch products they hope will become the next Airbnb. On Thursday, the startup accelerator hosted a different kind of gathering at its airy, modern San Francisco venue, bringing together founders, investors, and US government officials to discuss a question now dominating the startup landscape: whether AI will become the newest battleground between major technology corporations and smaller enterprises.
For many startup founders, questions about artificial intelligence carry existential weight. Since OpenAI released ChatGPT in late 2022, the conversation in this fast-growing field has been shaped largely by OpenAI's advances and by subsequent work from the AI divisions of giants like Google and Microsoft. But increasingly capable and accessible open source AI could change that dynamic.
On Thursday, the excitement for open source was not limited to YC-supported entrepreneurs who could gain from a more affordable method to leverage the capabilities of generative AI. Lina Khan, the head of the Federal Trade Commission, emerged as one of the leading proponents of open source AI during the gathering.
Addressing an audience of around 200 founders, Khan said it would not be overstating things to say that the vast majority of Y Combinator's top-performing startups owe their existence to open source software and the community that supports it. The FTC is currently paying close attention to open-weights AI models, which make their model parameters publicly available but are not quite as "open" as fully open source AI. According to Khan, the availability of open-weights models enables "emerging companies to launch their concepts into the marketplace."
Khan explicitly outlined the implications for the audience in attendance. "The current environment has given major tech corporations an advantage in the competition around artificial intelligence," she stated. "Owning the foundational elements means you can dominate the industry and exclude smaller enterprises lacking the necessary framework to contend."
Khan articulated her views in the context of advocating for equitable and transparent competition within the technology sector, while also supporting the measures taken by the FTC and the US Justice Department to regulate major technology firms in the last four years. Speaking at YC on Thursday, US Assistant Attorney General Jonathan Kanter echoed this sentiment, highlighting the agencies’ commitment to supporting smaller tech entities, a notion likely to appeal to the YC audience.
The presence of two influential regulators at an event predominantly attended by advocates of rapid innovation and disruption could have been considered implausible just a year prior. Y Combinator, established in 2005 by entrepreneurs Paul Graham and Jessica Livingston, among others, is better recognized for its rigorous mentoring and boot camp-style environment it offers to emerging businesses than for any connections to the political circles in Washington, D.C.
The change was deliberate. Last October, Y Combinator's head, Garry Tan, brought on policy specialist Luther Lowe to strengthen the dialogue between Y Combinator and Washington, DC. Lowe, who spent more than 15 years in public policy at Yelp and became a notable adversary of Google, has evidently brought a new level of policy sophistication to Y Combinator gatherings. Thursday marked the second time Khan has addressed YC entrepreneurs since Lowe's arrival.
Thursday's discussions were peppered with the abbreviations you would expect from such a lineup of speakers: YC, FTC, AI, LLMs. But the common thread running through them, and arguably the central one, was a strong endorsement of open source AI.
This marked a significant shift (or a comeback, for those familiar with Linux) from the 2010s, a decade characterized by an enthusiasm among developers to package their technologies into containers and entrust them to larger platforms for dissemination.
The event came just 48 hours after Meta CEO Mark Zuckerberg's announcement endorsing open source AI as the future direction, in which he introduced Llama 3.1, the newest version of Meta's open source AI model. In his statement, Zuckerberg voiced a sentiment shared by many in the tech community: a desire to break free from the constraints imposed by Apple, including its restrictive policies and application fees.
Interestingly, despite its name, OpenAI has opted not to utilize an open-source model for its most advanced Generative Pre-trained Transformers (GPTs). This decision leads to the organization keeping portions of its code confidential and not disclosing the specific "weights" or parameters of its most sophisticated AI technologies. Additionally, OpenAI imposes fees for businesses seeking access to its high-level technological offerings.
Ali Golshan, cofounder and CEO of the synthetic-data company Gretel.ai, says that as AI systems and workflows have grown more complex, small, finely tuned open source models have proven to deliver better results than larger models like OpenAI's GPT-4 or Google's Gemini, particularly for business tasks. (Golshan did not attend the YC event.)
Dave Yen, who oversees the Orange Collective, a fund backed by accomplished YC alumni that invests in emerging YC founders, dismisses the framing of OpenAI versus the rest of the industry. He believes the focus should be on fostering an equitable competitive landscape, one where startups aren't at risk of collapsing overnight because of a shift in OpenAI's pricing or policies.

Yen emphasized the importance of having protections in place, but cautioned against imposing unnecessary restrictions.
More cautious technologists have raised concerns about the inherent risks of open source AI models. The primary worry is that because these models are freely accessible to everyone, individuals with harmful intentions are more likely to use them for nefarious purposes than they would costly proprietary systems. Researchers have also shown that malicious users can easily and cheaply modify open source AI models to bypass their built-in safety measures.
The concept of "open source" can be somewhat misleading when it comes to certain AI models, as highlighted by WIRED's Will Knight. The training data for these models is often not disclosed, their licenses can impose limitations on what developers are able to create, and in the end, the original creators of the models might reap more benefits than anyone else.
Several lawmakers have resisted the unchecked growth of large AI systems, among them California state senator Scott Wiener. Wiener's proposed legislation, SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, has stirred debate in tech circles. The bill would set guidelines for AI systems whose training costs exceed $100 million: it mandates comprehensive safety assessments and vulnerability analyses before deployment, protects whistleblowers who expose unethical practices in AI labs, and empowers the state attorney general to take legal action should an AI system cause significant harm.
At the YC gathering on Thursday, Wiener took the stage for a conversation moderated by Bloomberg journalist Shirin Ghaffary. He expressed his deep appreciation for members of the open source community who have voiced opposition to the proposed legislation, and acknowledged that the state has made a number of revisions in response to their feedback. Wiener highlighted one amendment in particular: the legislation now includes a clearer strategy for deactivating an open source AI system that has become uncontrollable.
The surprise guest speaker at Thursday's event, added to the schedule unexpectedly, was Andrew Ng, known for cofounding Coursera, establishing Google Brain, and serving as chief scientist at Baidu. Like many attendees, Ng advocated for open source frameworks.
Ng stated, "This is a pivotal moment where we decide whether entrepreneurs can continue to innovate, or if instead, we should allocate the funds that would have been used for software development to engage legal counsel."