From Moonshot to Shutdown: Inside Google’s Ambitious Quest to Embody AI with Robotics

Embarking on Google’s 7-Year Quest to Equip AI with a Physical Form

In the chilly early days of 2016, I had just joined Google X, Alphabet’s secretive innovation lab. My job was to devise a strategy for the people and technology inherited from the acquisition of nine robotics firms. Confusion was rife. Andy Rubin, known as "the father of Android" and the project's previous leader, had abruptly departed under mysterious circumstances. Larry Page and Sergey Brin dropped in intermittently, trying to steer the project in their limited free moments. A few months earlier, Astro Teller, the head of Google X, had decided to fold the robotics team into the lab, endearingly dubbed the moonshot factory.

I joined because Astro had persuaded me that Google X—eventually referred to simply as X—was going to be unlike any other corporate innovation center. The founders aimed for groundbreaking ideas and possessed the necessary long-term investment to bring those ideas to life. Having founded and sold multiple tech enterprises throughout my career, this opportunity felt like a perfect fit. X appeared to be exactly the kind of ambitious project Google should be involved in. From my own journey, I understood the immense challenge of creating a company that could, in the words of Steve Jobs, make a significant impact on the world, and I was convinced that Google was the ideal place for taking on such grand challenges. One of the bold ventures was the development of AI-powered robots, envisioned to one day coexist and collaborate with us.

More than eight years have passed, and a year and a half since Google chose to abandon its significant investment in robotics and AI, it appears a new robotics startup emerges almost weekly. My belief has only deepened that the arrival of robots is inevitable. However, I am apprehensive about whether Silicon Valley, known for its emphasis on "minimum viable products" and venture capitalists' typical reluctance to fund hardware, will have the endurance to lead in the worldwide endeavor to equip AI with physical forms. Furthermore, a substantial portion of the investments being made seem to be targeting the incorrect areas. Here's the explanation.

The Concept of "Moonshot"

In 2010, Google X emerged, housing what would later be dubbed the Everyday Robots project, stemming from the ambitious belief that Google had the capacity to address some of the globe's most daunting challenges. To promote a distinct culture and encourage expansive thinking, X was deliberately positioned in a separate facility a few miles from Google's main hub. There was a significant emphasis on pushing X members to embrace substantial risks, engage in swift prototyping, and even view failure as a positive sign of setting exceptionally ambitious goals. By the time I joined, the laboratory had already given birth to innovations like Waymo and Google Glass, along with projects that seemed torn from the pages of a sci-fi novel, such as airborne turbines generating power and high-altitude balloons designed to extend internet coverage to remote areas.

X projects distinguished themselves from typical Silicon Valley ventures through their grand and forward-thinking ambitions. For a project to gain the prestigious label of a moonshot within X, it had to meet a specific set of criteria. Initially, the endeavor had to tackle an issue impacting hundreds of millions, or possibly billions, of individuals. Next, it was essential for there to be an innovative technological advancement that opened up a novel approach to addressing the challenge. Lastly, the project required a groundbreaking approach to business or product development that likely seemed barely feasible or slightly unconventional.

An Everyday Robots prototype sorting and discarding trash.

The Dilemma of AI Embodiment

Astro Teller, self-styled as the Captain of Moonshots, seems like the perfect fit for leading X. His presence is unmistakable in the Google X premises, a vast, three-floor space once a department store, thanks to his characteristic choice of rollerblades for navigation. Add to that his distinctive ponytail, a perpetually welcoming grin, and the unique moniker, Astro, and it feels like stepping into a scene from HBO’s Silicon Valley.

Astro and I sat together to ponder over the potential uses of the robotic companies Google had purchased, recognizing the necessity for action. Yet, what action? The robots available up until then were typically bulky, simple-minded, and hazardous, relegated to factories and storage facilities where intense oversight or physical barriers were essential to safeguard human workers. The challenge we faced was how to develop robots that could be both beneficial and secure for daily use. This endeavor demanded a fresh strategy. We were tackling a vast issue that had global implications—the demographic shifts toward older populations, diminishing labor forces, and widespread labor deficits. We were convinced that the key to overcoming this challenge lay in artificial intelligence, a belief we held as early as 2016. Our innovative solution proposed the creation of completely autonomous robots, designed to assist in a broad spectrum of routine tasks, marking a pivotal step toward addressing these societal challenges.

In essence, we were set to provide AI with a tangible presence in the real world, and I was certain that if there was any place capable of bringing such an ambitious project to life, it would be X. The journey ahead was expected to be lengthy and challenging, requiring a readiness to embrace outlandish concepts and accept numerous failures. Achieving our goal would necessitate groundbreaking advancements in both artificial intelligence and robotics, and the expenses were projected to reach into the billions. (Indeed, billions.) Our team harbored a strong belief that a fusion of AI and robotics was not just possible but imminent, looking slightly into the future. We were convinced that what had once been the domain of science fiction was on the cusp of becoming tangible reality.

An Everyday Robots robot delivers flowers on Valentine's Day.

Checking in with Mom

Roughly every seven days, I'd catch up with my mom over a call. She'd dive right in with her usual inquiry, bypassing greetings: "When will the robots arrive?" Her focus was on figuring out when our robots would be available to assist her. My reply was consistently, "Not anytime soon, Mom," to which she'd retort, "They better hurry up!"

Residing in Oslo, Norway, my mother had access to excellent public health services. Health aides visited her home thrice daily, assisting with various activities and household duties, primarily due to her severe Parkinson’s disease. Although these aides made it possible for her to maintain her independence in her residence, she aspired for robotic assistance to manage the numerous minor, yet challenging and sometimes humiliating obstacles, or just to have a robotic arm for support.

An Everyday Robots robot wipes down restaurant tables after mealtimes.

It's Quite Challenging

"Do you realize that robotics involves dealing with an entire system, correct?" Jeff inquired, giving me an inquisitive stare. It appears every team has their own "Jeff"; for us, it was Jeff Bingham. A lean, sincere individual holding a doctorate in bioengineering, Jeff was raised on a farm and known amongst us for his profound understanding and expertise in virtually all subjects. Even now, should you question me regarding robots, one of the initial points I'd highlight is that, essentially, it revolves around a systems issue.

Jeff emphasized the complexity of robots, highlighting that their effectiveness hinges on their most vulnerable components. For instance, if the robot's vision system struggles to function in bright sunlight, a single beam of light through a window could render it inoperative. Similarly, if it fails to recognize stairs, it might fall, potentially causing harm to itself and others nearby. Essentially, crafting a robot capable of coexisting and cooperating with humans presents significant challenges. It's an incredibly difficult task.

For many years, efforts have been made to code different types of robots to carry out basic activities, such as picking up a cup from a table or unlocking a door. However, these attempts have consistently resulted in fragile systems that collapse under the smallest shift in circumstances or slight alterations in surroundings. The reason? The unpredictability of the real world (consider the example of an unexpected beam of sunlight). And this doesn't even touch on the more complex challenges, such as navigating the chaotic and crowded areas in which we reside and operate.

Upon giving this thorough consideration, it becomes apparent that without imposing strict control, by ensuring every item is precisely positioned and the lighting conditions are perfectly stable, the act of, for instance, taking a green apple and setting it in a glass bowl on a table, turns into a task that is nearly insurmountable. This explains the reason behind encasing industrial robots in protective barriers. It allows for the environment, including the illumination and location of objects they handle, to remain consistent, eliminating the risk of accidentally hitting someone on the head.

Notice alerting Google staff and guests about the presence of robots roaming freely.

Understanding the Education of Autonomous Machines

Larry Page once shared with me a concept that seemed hard to grasp at first: to effectively develop robots capable of working beside humans, you only require the expertise of 17 individuals specializing in machine learning. I was skeptical, questioning how such a small team could establish the necessary technological foundations. However, Page was adamant, simplifying the matter to "just 17 is enough." This left me puzzled – why precisely 17? Why not a smaller or larger group? Clearly, there was an aspect I wasn't catching.

In essence, there are two main methods of integrating AI into robotics. The initial method is a combined approach. It involves using AI for specific segments of the system and then connecting these parts with conventional coding. In this method, the vision component might utilize AI to identify and classify objects in its environment. After identifying these objects, it compiles them into a list that is then processed by the robot's programming to determine actions based on predefined rules in the software. For instance, if the robot is programmed to pick an apple from a table, the AI-enabled vision system would identify the apple, and the robot's software would select the item labeled as “type: apple” from the list. Following this, it would use standard robotics control software to grab the apple.
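A minimal sketch of that hybrid flow, in Python. The `Detection` record, the labels, and the grasp callback are illustrative stand-ins, not Everyday Robots' actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # class name assigned by the learned vision model
    confidence: float
    position: tuple   # (x, y, z) in the robot's frame (illustrative)

def detect_objects(camera_frame):
    # Stand-in for the AI vision component: in the real system a neural
    # network would turn the camera feed into a list of labeled detections.
    return camera_frame  # here the "frame" is already a list of Detections

def pick_apple(camera_frame, grasp_fn):
    """Conventional control logic wired around the AI vision output."""
    detections = detect_objects(camera_frame)
    # Hand-written rule: select the item labeled "apple" from the list.
    apples = [d for d in detections if d.label == "apple"]
    if not apples:
        return None
    target = max(apples, key=lambda d: d.confidence)
    return grasp_fn(target.position)  # classical motion control takes over
```

With a scene containing a cup and an apple, `pick_apple` hands the apple's position to the grasp routine; everything after detection is ordinary hand-written control flow, which is exactly what makes the hybrid approach brittle outside anticipated cases.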

An alternative method, known as end-to-end learning or e2e, focuses on mastering complete activities such as "grasping an item" or broader tasks like "cleaning a table." This technique involves training robots using vast quantities of data, similar to how humans learn physical tasks. For instance, when instructing a young child to pick up a cup, they might need to first understand what a cup is and that it can hold liquids. Through exploration, they might frequently tip it over or spill its contents. However, through observation, mimicking others, and plenty of experimental interactions, they gradually learn how to handle the cup properly, eventually performing the task effortlessly without conscious thought about the individual steps involved.
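To make the contrast concrete, here is a toy end-to-end learner: behavioral cloning of a hidden "expert" policy with plain gradient descent. The linear policy, dimensions, and synthetic data are all invented for illustration; real systems train deep networks on camera pixels and motor logs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy end-to-end setup: the policy maps a raw observation vector straight
# to a motor command, with no hand-written perception or planning between.
obs_dim, act_dim = 8, 2
W = np.zeros((act_dim, obs_dim))

# "Demonstrations": (observation, expert action) pairs, the robotic
# analogue of a child watching someone pick up a cup. The expert here
# is a hidden linear map we try to recover.
W_expert = rng.normal(size=(act_dim, obs_dim))
observations = rng.normal(size=(512, obs_dim))
actions = observations @ W_expert.T

# Behavioral cloning: gradient descent on mean-squared imitation error.
lr = 0.05
for _ in range(500):
    pred = observations @ W.T
    grad = (pred - actions).T @ observations / len(observations)
    W -= lr * grad

err = np.mean((observations @ W.T - actions) ** 2)
print(f"imitation error: {err:.6f}")
```

The learned policy ends up reproducing the expert's actions almost exactly on this toy data; the hard part in robotics is that real demonstrations are scarce, noisy, and high-dimensional.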

From my understanding of Larry's perspective, he believed that the true measure of success lies in our ability to prove that robots can independently learn and execute complete tasks from start to finish. Achieving this would significantly increase our chances of creating robots capable of handling tasks in the complex and ever-changing real world, thereby marking our efforts as truly groundbreaking. He emphasized that monumental achievements are often the result of small, dedicated teams rather than large groups of engineers. Clearly, a robot's functionality isn't limited to its artificial intelligence capabilities, so I didn't halt our other engineering projects—we still needed to design and construct the physical aspects of the robot. However, it became apparent that showcasing a robot's ability to successfully complete a task from beginning to end would inspire confidence in our potential to achieve what is often described in ambitious projects as breaking free from Earth's gravitational constraints. In Larry's view, all other aspects were simply details that would follow.

Robot on the job hunt! (Team having fun post-announcement of Everyday Robots winding down in January 2023.)

At the Arm-Farm

Hailing from Germany, Peter Pastor is a robotics expert who earned his doctorate from the University of Southern California. Whenever he found time away from his professional duties, Peter attempted to match his girlfriend's prowess in kiteboarding. Inside the laboratory, he was often busy managing 14 custom-made robotic arms, which were eventually substituted with seven commercial Kuka robotic arms in an arrangement affectionately referred to as "the arm-farm."

Around the clock, these robotic arms were engaged in continuous attempts to grasp various items such as sponges, Lego pieces, rubber ducks, and faux bananas from a container. Initially, their instructions were to approach the container from an arbitrary point above, maneuver their pincer-like appendage into it, clamp down, lift, and then assess whether they had successfully grabbed an object. An overhead camera monitored and recorded what was inside the bin, the arm's maneuvers, and whether it succeeded or failed in its task. This process persisted for several months.

Initially, robots achieved a mere 7 percent rate of success. However, their performance improved through positive reinforcement each time they accomplished a task. This essentially entailed adjusting the "weights" within their neural networks. Such adjustments serve to promote behaviors that are wanted and discourage those that are not. As a result, these robotic arms became adept at grasping objects, succeeding over 70 percent of the time. A significant milestone was witnessed when Peter shared a video with me. It showcased a robotic arm skillfully moving aside other items to securely grab a yellow Lego block. This maneuver was not the result of direct, conventional programming but was something the robot had learned on its own.
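The reinforce-what-worked loop can be caricatured in a few lines of Python. The grasp strategies and success rates below are made up, and the real system adjusted millions of neural-network weights rather than three scalar preferences:

```python
import random

random.seed(42)

# Toy stand-in for the arm-farm loop: each attempt picks one of a few
# approach strategies; unseen to the learner, each has a different chance
# of success. Reward nudges the preferences toward what works.
success_prob = {"top": 0.07, "side": 0.25, "nudge-then-grasp": 0.70}
weights = {a: 0.0 for a in success_prob}  # learned preferences

def choose(eps=0.1):
    if random.random() < eps:                 # occasionally explore
        return random.choice(list(weights))
    return max(weights, key=weights.get)      # otherwise exploit the best

counts = {a: 0 for a in weights}
for _ in range(5000):
    action = choose()
    reward = 1.0 if random.random() < success_prob[action] else 0.0
    counts[action] += 1
    # Incremental update: move the weight toward the observed reward,
    # promoting behaviors that succeed and discouraging those that fail.
    weights[action] += (reward - weights[action]) / counts[action]

best = max(weights, key=weights.get)
```

After a few thousand trials the learner's preferences settle on the strategy that actually works most often, much as the arm-farm's success rate climbed from 7 percent toward 70.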

Yet, the notion of seven robots toiling for several months just to grasp how to handle a rubber duck simply wasn't sufficient. Indeed, even if hundreds of robots were to train for years, it wouldn't adequately prepare them for their initial practical tasks in the real world. Consequently, we developed a simulator hosted on the cloud and, in 2021, generated over 240 million simulated robot scenarios.

Imagine the simulator as an expansive virtual game that mirrors the complexities of real-world physics accurately enough to replicate the mass of objects or the resistance offered by different surfaces. In this digital arena, countless virtual robots, designed to resemble their physical counterparts, utilize artificial vision and their virtual forms to execute tasks such as lifting a cup from a surface. Operating simultaneously, these digital entities engage in numerous trials and errors, amassing extensive data to refine AI algorithms. Once these virtual robots achieve a satisfactory level of proficiency, their learned algorithms are transferred to tangible robots for final adjustments in the real-world environment, allowing them to perfect their newly acquired skills. I've always likened this simulation process to robots spending the night dreaming, only to awaken with new knowledge.
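The sim-then-real recipe, reduced to a single physical parameter as a sketch. The friction values, update rule, and episode counts are invented; the point is the split between many cheap simulated trials and a few expensive real ones:

```python
def make_env(friction):
    """A 'world' reduced to one fact: a grasp holds a unit-weight object
    only if grip force times surface friction clears the weight."""
    def attempt(force):
        return force * friction >= 1.0
    return attempt

def train(env, force, episodes):
    """Crude trial-and-error: squeeze harder after a drop, relax a touch
    after a success (to avoid crushing things)."""
    for _ in range(episodes):
        if env(force):
            force -= 0.002
        else:
            force += 0.02
    return force

sim  = make_env(friction=0.60)   # the simulator's guess at physics
real = make_env(friction=0.50)   # the real world is a bit slipperier

force = train(sim, force=1.0, episodes=2000)  # cheap: runs at scale, in parallel
force = train(real, force, episodes=50)       # expensive: only a few real trials
```

Because the simulated physics are close to (but not exactly) the real thing, the policy arrives nearly correct and only needs a short fine-tuning pass on hardware, which is the whole economic argument for "dreaming" overnight in simulation.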

Initial version of a robot being trained to categorize waste.

The Awakening to Data's Power

Then ChatGPT appeared, and to all of us it felt like witnessing a miracle: here was a machine-learning system that could write whole passages, answer complex questions, and hold a sustained conversation. But we also quickly grasped the catch: such feats require vast troves of training data.

Currently, robots utilize advanced language models to comprehend spoken words and vision systems to interpret visual information, which results in impressive demonstrations on YouTube. However, the challenge of enabling robots to independently operate and coexist with humans presents a significantly larger issue in terms of data. Despite the use of simulations and various methods for generating training data, it is quite improbable that robots will suddenly become highly skilled overnight, equipped with a comprehensive model that oversees the entire system.

It's still uncertain how sophisticated the functions are that we'll be able to instruct robots to carry out using just AI. My belief has evolved to the point where I think it will necessitate the involvement of an extensive number, possibly hundreds of thousands or even millions, of robots engaged in real-world activities to amass adequate data for training end-to-end models. These models are necessary for enabling robots to undertake tasks beyond those that are highly specific and clearly defined. The development of robots that can perform valuable tasks—such as cleaning and sanitizing all the surfaces in a restaurant or tidying up the beds in a hotel—will likely depend on a combination of AI and conventional programming for the foreseeable future. In simpler terms, the idea of robots acting independently beyond their programming and control is not something we should anticipate happening in the near future.

An early robot prototype learning to open doors and clean restrooms.

Is Replication Necessary?

Horses excel in locomotion using their four limbs, yet our automobiles are crafted with wheels. The human brain operates with remarkable efficiency, vastly surpassing the capabilities of silicon-based computers. So, why do our vehicles not walk, and why did we not shape computers after the human brain’s architecture? My point is, the objective in creating robots should extend beyond mere imitation.

During a gathering with a team of tech experts at Everyday Robots, I came to an interesting realization. As we engaged in lively debate around a conference table, the topic of whether our robots should be equipped with legs or wheels emerged. These conversations often shifted from being rooted in evidence or science to resembling more of a passionate, ideological argument. There's a strong sentiment among some that robots ought to mimic the human form. The logic behind this perspective is solid. After all, our environments are designed with human mobility in mind, and since we possess legs, it stands to reason that perhaps our robotic counterparts should as well.

Approximately half an hour into the discussion, Vincent Dureau, the highest-ranking engineering leader present, broke the silence. His straightforward remark was, "If I can make it there, then the robots can too." Vincent, who was using his wheelchair, made this statement. Following his comment, silence enveloped the room, effectively ending the argument.

The truth is, the construction of robot legs involves intricate mechanical and electronic components. They lack speed and tend to compromise the robot's stability. Moreover, they are less energy-efficient than wheels. Observing companies strive to develop humanoid robots—machines designed to closely resemble human appearance and capabilities—I sometimes question whether this reflects a lack of creativity. There's a vast array of designs that could work alongside humans effectively. Why obsess over trying to replicate human features? At Everyday Robots, our approach was to simplify the robot's design as much as we could—aiming to expedite their ability to carry out tasks in the real world, thereby accelerating our acquisition of useful data. Vincent's observation served as a reminder that our primary focus should be on tackling the most challenging and significant problems.

Office Assignment

While I was at my desk one day, a one-armed robot with a head shaped like a rounded rectangle rolled up to me. It greeted me by name and asked whether it could tidy up the area. I agreed and stepped aside. A few minutes later, it had gathered a few discarded paper cups, an empty Starbucks iced-tea container, and a plastic Kind-bar wrapper, depositing them into a waste compartment in its base. Then it gave me a nod and moved on to the next desk.

The launch of the neat-desk service marked a significant breakthrough, indicating that we were advancing in solving a complex aspect of robotics. This service demonstrated the robots' capability of using AI to accurately identify both humans and objects. Benjie Holson, a software engineer with a background in puppeteering who spearheaded the development of this service, favored a combined method. He wasn't opposed to completely automated tasks; rather, he preferred a practical approach of putting them to immediate use. Should the machine learning experts develop a fully automated solution that outperformed his team's programming, they would readily adopt these new techniques.

The sight of robots bustling about, performing tasks such as cleaning desks, had become a familiar scene for me. Every now and then, a newcomer or a guest would catch my attention with their expressions of astonishment and delight as they observed the robots at work. It was through their reactions that I was brought back to the realization of how extraordinary this scene actually was. Rhys Newman, our chief designer, captured this sentiment perfectly one day as a robot passed us, commenting in his Welsh accent, "This has turned into the everyday for us. Strange, right?"

Catie Cuan, the artist in residence at Everyday Robots, dances with a robot.

Dance of the Day

At Everyday Robots, our consulting team was composed of a diverse group of experts, including a philosopher, an anthropologist, a past union leader, a historian, and an economist. We engaged in passionate discussions about various economic, social, and philosophical issues, such as: What would the economic repercussions be if robots coexisted with us? How would labor be affected immediately and in the future? In an era dominated by smart machines, what defines human identity? And, how can we construct these machines to ensure they foster a sense of safety and belonging?

In 2019, when I informed my team about our search for an artist in residence who could explore innovative, unusual, and unpredictable projects with our robots, I encountered Catie Cuan. At that time, Catie was pursuing her PhD in robotics and artificial intelligence at Stanford University. What stood out to me was her background as a professional dancer, with performances at prestigious venues such as the Metropolitan Opera Ballet in New York City.

Chances are you've come across videos on YouTube featuring robots executing dance routines—where the robot follows a set pattern of movements in time with music. While entertaining, these performances are akin to what one might encounter on an attraction at Disneyland. I posed a question to Catie about the possibility of robots improvising and interacting in a manner similar to human beings, or like birds in a flock or fish in a school. To achieve this vision, she, along with a handful of other engineers, created an AI algorithm that learns from the stylistic choices of a choreographer, who in this instance, is Catie herself.

In the calm of the evenings and occasionally on weekends, when their daily tasks were paused, Catie and her makeshift crew would assemble a group of robots in a spacious central atrium within X. These robots, in groups, started to synchronize their movements, sometimes in a hesitant manner yet forming captivating patterns, exhibiting a sense of inquisitiveness and at times, elegance and allure. Tom Engbersen, a Dutch roboticist who enjoyed recreating famous paintings in his leisure, embarked on a collaborative project with Catie to delve into the potential of dance-responsive robots or those capable of musical performance. An innovative thought occurred to him: What if the robots themselves turned into musical instruments? This sparked a venture where the movement of each robot's joint produced a distinct sound. A bass note would emanate from the base's movement; a bell-like sound was produced by the opening and closing of a gripper. Activating the music mode transformed the robots into creators of unique musical compositions with every movement. Regardless of their task—navigating corridors, sorting waste, cleaning surfaces, or moving in unison—the robots adopted an auditory and visual form that was both novel and engaging, offering an experience unlike any I had previously encountered.
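As a hypothetical sketch of that idea, the mapping from joint motion to sound might look like the snippet below. The joint names, frequencies, and thresholds are invented, not the real system's:

```python
# Hypothetical "music mode": each joint's motion maps to a tone, so any
# movement becomes a small composition. Values here are illustrative.
JOINT_TONES = {
    "base":     55.0,    # low bass note for the moving base
    "shoulder": 220.0,
    "elbow":    440.0,
    "gripper":  1760.0,  # bright, bell-like for open/close
}

def sonify(joint_deltas, threshold=0.01):
    """Turn one control tick's joint movements into (frequency, volume)
    events: bigger movements play louder, tiny ones stay silent."""
    events = []
    for joint, delta in joint_deltas.items():
        if abs(delta) >= threshold and joint in JOINT_TONES:
            volume = min(1.0, abs(delta) * 10)
            events.append((JOINT_TONES[joint], round(volume, 2)))
    return events
```

Feeding the function a tick where the base glides and the gripper snaps shut yields a bass note and a bell together, while a nearly motionless elbow contributes nothing, so each task the robot performs traces out its own melody.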

The Start of Something New

As 2022 was drawing to a close, the debate between using purely end-to-end techniques versus a hybrid approach was still in full swing. Peter and his team, in collaboration with our peers at Google Brain, had dedicated their efforts to incorporate reinforcement learning, imitation learning, and the transformer architecture, which is the foundation of Large Language Models (LLMs), into a variety of robotic tasks. They were achieving notable success in demonstrating that robots could be taught tasks in a manner that was not only generalizable but also robust and adaptable. At the same time, the team headed by Benjie was focused on integrating AI models with conventional programming methods to develop and test robotic services. These services were being designed for deployment in everyday environments, directly interacting with humans.

At the same time, the initiative known as Project Starling, the name given to Catie's collective robot display, began to alter my perception of these devices. I observed the fascination, delight, and inquisitiveness that the robots sparked in onlookers. This experience made me realize that the manner in which robots navigate our spaces and their sounds can evoke profound emotional responses, playing a significant role in our acceptance and integration of them into our daily routines.

In essence, we were on the brink of fully leveraging our most significant investment: AI-driven robots. AI enabled these robots to interpret both verbal and written communication and convert those inputs into actions, or to process visual data (from camera feeds) and identify scenes and objects to interact with. As demonstrated by Peter's team, these robots had mastered the skill of object manipulation. After a period of over seven years, we had begun deploying numerous robot units throughout various Google facilities. A single model of robot was capable of performing a variety of tasks, including autonomously cleaning tables in dining areas, checking conference rooms, sorting waste, and more.

In January 2023, two months after OpenAI launched ChatGPT, Google shut down Everyday Robots, citing overall cost pressures. The technology and a portion of the team were folded into Google DeepMind for continued research. Even though the costs were significant and the timelines long, the decision caught everyone involved off guard.

A Critical National Priority

In 1970, the global ratio of working-age individuals to those aged 64 and above was 10 to 1. By 2050, it is expected that this ratio will drop to less than 4 to 1. The world is facing a shortage of labor. Questions arise such as who will provide care for the aging population? Who will staff our factories, hospitals, and eateries? Who will be behind the wheel of trucks and taxis? Nations like Japan, China, and South Korea are acutely aware of the urgency of this issue. For these countries, the adoption of robotics is not a matter of choice but of necessity. They have prioritized the development of robotic technologies as a matter of national importance.

Integrating artificial intelligence into tangible, real-world entities presents significant challenges related to national defense and offers vast potential for economic growth. Should a major tech corporation such as Google opt out of pursuing ambitious projects like developing AI-driven robots designed to enhance and support future workforces, one wonders who might take on such ventures. Can we expect Silicon Valley or other innovative hubs to rise to the occasion, especially when it comes to securing the necessary, enduring financial backing? I'm skeptical. The project dubbed Everyday Robots was labeled a "moonshot" precisely because the creation of such intricate systems on a grand scale surpasses the endurance typically displayed by startups reliant on venture capital funding. Although the United States currently leads in AI technology, forging its robotic counterparts demands expertise and facilities that other countries, notably China, are already excelling in.

The robots didn't arrive in time to help my mother. She died in early 2021. Our regular conversations toward the end of her life only strengthened my conviction that a better version of what we started at Everyday Robots is coming; indeed, it is needed more urgently than ever. So we must ask ourselves: what will it take for this change, this future, to happen? I remain curious, and concerned.

Share your thoughts on this piece by sending a letter to the editor at mail@wired.com.




Raul Fernandez Grapples with Rear Tyre Woes Despite Strong Performance at Catalunya MotoGP

Sports4 months ago

Verstappen Identifies Sole Positive Amidst Red Bull’s Monaco Struggles: A Weekend to Reflect and Improve

Moto GP4 months ago

Joan Mir’s Tough Ride in Catalunya: Honda’s New Engine Configuration Fails to Impress

Sports4 months ago

Leclerc Triumphs at Home: 2024 Monaco Grand Prix Round 8 Victory and Highlights

Sports4 months ago

Leclerc’s Monaco Triumph Cuts Verstappen’s Lead: F1 Championship Standings Shakeup After 2024 Monaco GP

Sports4 months ago

Perez Shaken and Surprised: Calls for Penalty After Dramatic Monaco Crash with Magnussen

Sports4 months ago

Gasly Condemns Ocon’s Aggressive Move in Monaco Clash: Team Harmony and Future Strategies at Stake

Business4 months ago

Driving Success: Mastering the Fast Lane of Vehicle Manufacturing, Automotive Sales, and Aftermarket Services

Cars & Concepts2 months ago

Chevrolet Unleashes American Powerhouse: The 2025 Corvette ZR1 with Over 1,000 HP

Business4 months ago

Shifting Gears for Success: Exploring the Future of the Automobile Industry through Vehicle Manufacturing, Sales, and Advanced Technologies

AI4 months ago

Revolutionizing the Future: How Leading AI Innovations Like DaVinci-AI.de and AI-AllCreator.com Are Redefining Industries

Business4 months ago

Driving Success in the Fast Lane: Mastering Market Trends, Technological Innovations, and Strategic Excellence in the Automobile Industry

Tech4 months ago

Driving the Future: Exploring Top Innovations in Automotive Technology for Enhanced Safety, Efficiency, and Connectivity

Mobility Report4 months ago

**”SkyDrive’s Ascent: Suzuki Propels Japan’s Leading eVTOL Hope into the Global Air Mobility Arena”**

V12 AI REVOLUTION COMMING SOON !

Get ready for a groundbreaking shift in the world of artificial intelligence as the V12 AI Revolution is on the horizon

SPORT NEWS

Business NEWS

Advertisement

POLITCS NEWS

Chatten Sie mit uns

Hallo! Wie kann ich Ihnen helfen?

Discover more from Automobilnews News - The first AI News Portal world wide

Subscribe now to keep reading and get access to the full archive.

Continue reading

×