Last spring, Chinese and American envoys sat across from one another in a conference room in Geneva for what both governments described as a “candid and constructive” meeting. The subject was artificial intelligence: its hazards, its risks, and who gets to set the rules. Read the separate summaries each side released afterward, though, and you quickly realize they weren’t describing the same conversation. Washington raised China’s “misuse of AI.” Beijing objected to American “restrictions and pressure.” The diplomatic language held. The underlying tension did not.
By geopolitical standards, that meeting, the first official AI dialogue between the world’s two largest economies, was modest. No agreement was signed. No joint statement was issued. Yet it marked a turning point: the race to control artificial intelligence had stopped being merely a technology story and become something older and more recognizable, a struggle between superpowers, each certain that the other cannot be trusted with the future.
| Category | Details |
|---|---|
| Topic | The geopolitical and economic competition to lead in artificial intelligence development and regulation |
| Primary actors | United States, China, European Union — the three dominant forces shaping global AI policy |
| US proposed AI funding | $32 billion in AI research investment requested by bipartisan Senate group to maintain edge over China |
| Key private sector milestone | Scale AI raised $1 billion in a recent funding round, reaching a valuation of $13.8 billion; CoreWeave valued at $19 billion |
| EU regulatory approach | Rights-driven; pursuing comprehensive AI Act with proposed bans on predictive policing and mandatory transparency for high-risk systems |
| China’s regulatory model | State-driven; government maintains central control over AI development and deployment domestically |
| First US–China AI dialogue | Held in Geneva, 2024 — both sides described talks as constructive, though core tensions over restrictions and misuse remained unresolved |
| Leading chipmaker | Nvidia — Q2 2024 revenue forecast of approximately $28 billion, driven almost entirely by AI computing demand |
| Key regulatory framework book | Digital Empires by Anu Bradford (Columbia Law) — examines market-driven, state-driven, and rights-driven models colliding globally |
| Military dimension | AI’s role in defense is increasingly compared to the nuclear arms race — automated weapons systems raising strategic stability concerns |
| Workforce impact | KPMG Global CDO: “Humans who use AI may replace those who don’t” — skills training identified as the critical near-term challenge |
The scale of what is at stake has become clear quickly. A bipartisan group of US senators, led by Majority Leader Chuck Schumer, urged Congress to approve $32 billion in funding for AI research; the stated objective was staying ahead of China. Across the Pacific, Beijing has for years been folding AI ambition into its broader industrial and military strategy, treating technological leadership as roughly equivalent to leadership of the global order. In Brussels, meanwhile, negotiators in the European Parliament have been engaged in a different kind of struggle, not over who will win the race but over what the race should even be permitted to look like.
Nvidia is worth a moment’s attention here, because the company serves almost as a gauge of how serious the world has become about this. Its second-quarter revenue forecast of roughly $28 billion, well above analyst expectations, was driven primarily by demand for AI computing. The stock shot up. Investors piled in. That number does not represent the future in any nebulous, abstract way. It represents factories running, chips being shipped, and governments calculating that lagging in AI infrastructure could resemble lagging in steel or oil a century ago.

The European perspective on all of this has been distinctly different, and probably more interesting than it is given credit for. Brussels does not define winning the AI race the way Beijing and Washington do. The EU’s draft AI Act is focused on rights rather than speed, and was hammered out through “long and tedious” negotiations, in the words of one senior negotiator. Internal debates over what counts as high-risk AI were intense. Should predictive policing be banned outright? What happens when caution and innovation pull against each other? It took months to resolve the ideological divide within the European Parliament alone, which illustrates how hard these questions become when people are genuinely trying to answer them rather than merely stating positions.
Anu Bradford, a professor of law at Columbia University who has studied this three-way collision for years, characterizes it as a contest between three regulatory empires: Europe’s rights-based approach, China’s state-driven model, and America’s market-driven model. Each reflects a genuine aspect of how those societies view the proper use of power. And these models are increasingly being exported. Nations in Southeast Asia, Africa, and Latin America, watching from the outside, are being asked, sometimes subtly, which model their own digital futures will follow. That kind of competition moves more slowly than a chip export ban, but in the long run it may matter more.
What makes this era truly peculiar is that the technology is advancing faster than the institutions attempting to govern it. The people writing the regulations are still learning about the technology even as they try to constrain it. No government has clear answers. When a KPMG executive said at VivaTech in Paris that humans who use AI may eventually replace those who don’t, the point was about workforce training, but there is a more serious version of that statement, one that applies to nations as well as individuals.
The military dimension adds a weight to all of this that is hard to overlook. Analysts watching the development of AI-enabled weapons systems have begun comparing it to the early nuclear era, not because the outcomes are necessarily the same but because the strategic reasoning feels similar: both sides are racing, neither fully knows the other’s capabilities, and stability rests more on mutual uncertainty than on treaties. Whether the analogy holds is still up for debate. But serious people are not making the comparison carelessly.
Watch this debate unfold for long enough and it becomes difficult to ignore that it isn’t really about technology. The question is who gets to set the rules in a world rearranging itself around a capability that no one fully controls yet. Geneva was not a conclusion but a start. The harder conversations are still to come.
