Securing U.S. AI Leadership While Preempting Strategic Drift

To obtain NVEU status, T2 applicants are “strongly encouraged” to first secure a government-to-government assurance before seeking approval from four federal agencies—the Departments of Commerce, State, Defense, and Energy. While UVEU applicants must also clear this multi-agency review, T2 firms face structural disadvantages compared to T1 hyperscalers and neoclouds, which already operate within established compliance frameworks, observability systems, and security protocols, making many of the requirements routine. In addition, T2 NVEU applicants face heightened scrutiny in demonstrating efforts to sever supply chain dependencies with China. While not all countries are equally exposed, many have integrated Chinese networking equipment and hardware into their infrastructure because of its cost advantages, making compliance both technically complex and costly.

T3 Access and Restrictions

There will be a presumption of denial for all T3 countries—in regulatory parlance, an automatic “no” for any export license application involving T3 entities. None of the streamlined “VEU” routes (universal or national) apply. In principle, a T3 company cannot simply waltz through the licensing process—it is effectively shut out.

The Uneven Impact of the AI Diffusion Rule

The impact of the rule will, unsurprisingly, be uneven. While it is tempting to cast T1 as the “winners” and T2 as the “losers,” this would be an oversimplification. Even within each tier, the outcomes will be far from uniform—some T1 countries will gain disproportionately, while others in T2 will feel the brunt of the restrictions more acutely.

Surprisingly, some T2 countries may benefit under the new rules. For instance, entities in parts of the Middle East and Central Asia—where any chip purchase previously required an export license—now have pathways to access controlled chips without a license, provided they stay within the bounds of low-volume purchase pathways (LPPs) or individual country allocations. The UAE, for instance, which spent months negotiating deals such as the Microsoft-G42 partnership and has already secured a memorandum of understanding with the United States, will welcome the added clarity and is well positioned to obtain NVEU status. Conversely, other T2 nations that previously faced no restrictions will now find their access significantly tightened.

Broadly speaking, the new rule is likely to be a net negative for T2 nations with significant AI data center capacity pipelines, particularly if that capacity is not slated for use by U.S. hyperscalers.

Biggest Losers

Based on SemiAnalysis data, Malaysia will be the hardest-hit country. Its data center capacity has surged from just 100 megawatts (MW) in 2023 to a projected ~3.5 GW by 2027, positioning it to become the world’s third-largest data center country, behind the United States and China, by 2026.

Nearly half of Malaysia’s projected 2027 capacity is optimized for cutting-edge NVIDIA AI accelerators, with facilities capable of supporting power densities of up to 130 kilowatts (kW) per rack. This has drawn major investments, including NVIDIA’s $4.3 billion partnership with Malaysian conglomerate YTL to build supercomputing facilities and cloud AI services.

Malaysia is likely to get caught in the crosshairs of the rule as it has become a key destination for Chinese colocation and leasing activity. By 2027, ByteDance, the parent company of TikTok, is expected to lease 628 MW of Malaysia’s total data center capacity. Similarly, DayOne (formerly GDS) is adding 415 MW by 2026, much of which is leased to ByteDance.

Oracle finds itself in a bind—hence the escalating blog posts from Executive Vice President Ken Glueck on December 19, January 5, and February 4. Oracle’s strategy relied heavily on Malaysia, with $6.5 billion in planned investments that will likely exceed the new 7 percent country cap for T2 nations. While it could theoretically rebalance through aggressive expansion in T1 countries, this would require significant new investment and strategic repositioning.

After Malaysia, India will be the hardest-hit country. However, unlike Malaysia, India’s AI infrastructure is more closely tied to U.S. hyperscalers, offering some insulation from the restrictions. Still, with ~3 GW of planned data center capacity, India closely rivals Malaysia in scale, but has even larger ambitions. Mukesh Ambani, India’s richest person and chairman of Reliance Industries, recently announced plans to build a 3 GW mega data center campus in Jamnagar, Gujarat, which would be the world’s largest, with a projected investment of $20–30 billion. The facility is intended for AI workloads and is expected to rely on NVIDIA’s leading-edge Blackwell AI processors. The new restrictions threaten to derail these initiatives, potentially thwarting India’s aspirations.

Beyond Malaysia, Southeast Asia overall is likely to suffer.

Singapore was an early mover in the data center boom of the 2000s, leveraging its connectivity infrastructure. But its energy constraints quickly caught up. In 2019, Singapore imposed a moratorium on new data centers after projections showed they could consume 12 percent of the nation’s electricity by 2030. Since then, Singapore has allocated modest expansions—80 MW for four new data centers by 2023 and a 2024 pledge to add 300 MW, prioritizing green energy options. Given its fundamental constraints, however, Singapore’s growth will remain capped, relying on reshuffling its deck to retire legacy facilities and optimize existing capacity.

Indonesia, meanwhile, is less handicapped by similar limitations and has been muscling in on the AI data center space. It recently completed Phase 1 of the BDx CGK4 campus in Jatiluhur—a renewable-powered AI data center park, scalable up to 500 MW, offering high power density of up to 120 kW per rack, liquid cooling technologies, and high-speed connectivity to meet the demands of AI workloads.

Indonesia’s ample land, energy, and ability to leverage renewable power have also made it a natural magnet for Chinese investment, with Tencent Holdings pledging $500 million to develop its third data center in the country by 2030.

Brazil will be the most affected country in Latin America and has positioned itself as a regional AI data center powerhouse. Leading the charge is Scala Data Centers, whose São Paulo campus is set to expand beyond 350 MW, but its real centerpiece is the so-called AI City—a proposed 4.75 GW campus that will cost upward of $90 billion. Ordinarily, such a proposal would be laughed out of the room, but the proposed site is located next to an idle 3 GW substation and surrounded by untapped wind and hydro power, resources that few other nations can match. With the right financial backing, it could be the largest AI data center in the world. The new U.S. restrictions would, however, significantly undercut these plans.

While the UAE and Saudi Arabia show relatively small confirmed capacities through 2027, this understates their long-term aspirations, with several gigawatt-scale projects that extend well beyond 2027. Most, however, have not yet started earthworks. The Gulf states possess two distinct advantages: huge energy reserves and the ability to deploy state-backed capital with near-limitless patience to build the next generation of AI data centers.

Near-term development is led by G42/Khazna, with 406 MW of planned capacity across 13 campuses, including a flagship 100 MW Dubai Ajman campus. However, its true ambitions are reflected in its longer-term plans. According to SemiAnalysis, G42/Khazna is planning a staggering 5 GW aggregate pipeline across the Middle East, while Google is eyeing a 3 GW pipeline near Saudi Arabia’s King Salman Energy Park.

Biggest Winners

While the AI Diffusion Rule does not create outright “winners” apart from the United States—by design the primary beneficiary—T1 countries’ AI infrastructure plans are not at risk and can proceed without regulatory headwinds. In addition, T1 countries are likely to siphon off deployments originally earmarked for T2 nations now caught in the rule’s constraints. Based on SemiAnalysis data, Australia emerges as a standout case among T1 nations, followed by the United Kingdom, Japan, Ireland, Germany, Canada, South Korea, the Netherlands, and Spain. In fact, nine of the eighteen T1 countries have more than 1 GW of planned AI data center capacity, compared to just four nations across all of T2, and five T1 countries exceed 2 GW of planned capacity.

Australia will likely be the largest beneficiary of the rule, particularly in the Asia-Pacific (APAC) region, with its ~3 GW of planned capacity. Australia’s energy landscape provides a crucial competitive advantage. Unlike Japan and South Korea, which rely heavily on imported energy, Australia is one of the world’s largest energy exporters. Australia’s consistent year-round solar irradiation also translates directly to data center economics, enabling cost-effective power purchase agreements (PPAs) for data center operators.

Moreover, colocation accounts for approximately 75 percent of Australia’s market. While hyperscalers like Microsoft, Google, and AWS can self-build and operate their own facilities, colocation providers offer a compelling alternative: ready-to-use data center facilities that cloud providers can quickly lease, which means lower capital requirements and faster time to market.

The Australian market is anchored by three mature colocation providers that rank among the world’s best—the “three Goliaths”: AirTrunk, NextDC, and Canberra Data Centers (CDC). NextDC’s 550 MW mega campus demonstrates its capacity for large-scale development, while the ability of AirTrunk, a homegrown success story, to deploy direct-to-chip liquid cooling is particularly significant as the industry faces a big transition: NVIDIA’s upcoming Blackwell AI chips (GB200) require this advanced cooling technology to handle power densities up to 130 kW per rack. Many data center operators globally, including tech giant Meta, have had to completely redesign facilities to accommodate these new requirements.

What truly distinguishes the “three Goliaths” is their focus: instead of serving a broad mix of enterprise clients, they primarily build for hyperscale cloud providers, whose exacting technical requirements and scaling needs they already understand well. Combined with its privileged T1 status under the new rules, Australia is ideally positioned to capture displaced AI computing demand from restricted Southeast Asian markets like Malaysia.

The Theory of Success for the United States

The AI Diffusion Rule will funnel T2 countries toward U.S. hyperscalers and allied T1 neoclouds as the default gateway to advanced compute, which the U.S. largely controls, creating a de facto lock-in of AI infrastructure worldwide.

The rule is not just about controlling who gets access to U.S. compute; it is also about forcing countries to choose sides. In order to secure U.S. compute, T2 NVEU applicants will also have to sever supply chain dependencies with China—i.e., they need to declare for Washington and eject Beijing, or risk falling behind. But company-level decoupling will not suffice; securing U.S. approval will require formal government assurances at the national level to get into Washington’s good graces.

Creating Economic Pressure: The Approval Gap Between UVEU and NVEU

While the path to securing NVEU status appears straightforward, as highlighted previously, the process presents significant practical hurdles that could create dangerous delays for countries with ambitious AI data center plans.

Many projects, like those in Malaysia, are purpose-built for top-end AI training, with ultra-dense power racks, advanced liquid cooling, and infrastructure optimized for next-generation chips like NVIDIA’s Blackwell line. Yet, even with an NVEU license, TPP limitations and supply constraints may prevent operators from acquiring enough state-of-the-art GPUs to fill those racks. Meanwhile, Tier 1 license holders face a 7 percent limit on how much of their total compute can be deployed in any single country, creating further uncertainty for T2 data centers looking to secure large clients.

Operators could repurpose racks for older GPUs, but those chips do not require the high-density cooling and power infrastructure already in place, turning specialized, high-capex facilities into underutilized white elephants. Retrofitting for lower-performance workloads would also undermine the original investment.

Moreover, the economics are unforgiving. Modern data center projects require billions in upfront capital expenditure—from land acquisition and power infrastructure to advanced cooling systems. These investments are typically highly leveraged, with financing structures that assume rapid customer deployment to generate cash flow for debt service.

Even short delays awaiting NVEU approval leave these specialized facilities vulnerable. Loan payments may come due without corresponding revenue from AI workloads, even as the facilities incur the same fixed costs for maintenance, security, and staff. For operators and investors, the choice will be clear. Unfilled racks mean certain losses. They will likely lease to T1 hyperscalers who can deploy immediately under UVEU status, or risk their facilities becoming stranded assets.

Lock-In Effects

The framework effectively drives T2 compute capacity toward U.S. hyperscalers and T1 neoclouds, which gain preferential access to T2 markets. But the implications go beyond just access to compute. Hyperscalers and neoclouds also offer stable service-level agreements and robust developer ecosystems (e.g., CUDA libraries), with better track records for uptime, enterprise-grade support, and compliance. Over time, the cost and friction of switching to alternative providers like Huawei would become prohibitively high. Migrating datasets, restructuring technical operations, retraining staff, and rebuilding applications would impose significant costs and operational challenges, reinforcing lock-in effects.

Potential Response from China and Allies

The Hardware Gap: China’s Compute Constraints

The assumption that China can immediately compensate for U.S. controls with domestic alternatives is not supportable with current evidence. In the near term, China faces significant constraints in both the quality and quantity of chips it can produce.

Quality Gap: Huawei’s Ascend series and other Chinese GPU alternatives lag behind NVIDIA by one to two generations. According to Chris Miller, Huawei’s most advanced AI chip, the Ascend 910B, achieves only 280–400 TeraFLOPS compared to 2,250 TeraFLOPS for NVIDIA’s most advanced Blackwell chips. This performance differential of 5.6–8x is reflected in real-world adoption. According to Epoch AI, of 263 documented AI models where hardware was known, only two used Huawei Ascend chips, while 31 Chinese organizations relied on NVIDIA hardware. Even DeepSeek trained its models on NVIDIA H800s.

Quantity Gap: SemiAnalysis projects that China will produce just 1.8 million Huawei Ascend 910B/910C GPUs by the end of 2025, while U.S. AI labs and hyperscalers are projected to deploy 14.3 million AI accelerators in the United States, which are significantly more performant, suggesting an even larger compute gap at the aggregate national level. This limited fabrication capacity means that China will likely prioritize domestic needs, limiting its ability to offer a credible alternative to U.S. technology in global markets.
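A back-of-envelope check of the figures cited above (all numbers are those quoted in the text; because per-chip averages for the U.S. accelerator fleet are not given, the chip-count ratio is a floor on the aggregate gap):

```python
# Back-of-envelope check of the compute-gap figures cited in the text.

# Quality gap: Huawei Ascend 910B vs. NVIDIA Blackwell (TeraFLOPS)
ascend_910b_low, ascend_910b_high = 280, 400
blackwell = 2250

ratio_low = blackwell / ascend_910b_high   # best case for Huawei
ratio_high = blackwell / ascend_910b_low   # worst case for Huawei
print(f"Per-chip performance gap: {ratio_low:.1f}x to {ratio_high:.1f}x")
# roughly 5.6x to 8.0x, matching the differential quoted in the text

# Quantity gap: projected 2025 chip counts (millions)
china_ascend_units = 1.8
us_accelerator_units = 14.3
print(f"Chip-count gap: {us_accelerator_units / china_ascend_units:.1f}x")
# ~7.9x by count alone; since U.S. accelerators are also more performant
# per chip, the aggregate compute gap is larger still.
```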

It is worth cautioning that this technology gap may not be permanent. In the medium to long term, forced localization and state-backed capital might narrow China’s performance gap from one generation behind to on par or slightly behind.

Second-Tier Status, First-Tier Ambitions

T2 countries will, of course, publicly acquiesce to U.S. restrictions. But diplomatic accommodation should not be mistaken for genuine alignment. India, the Gulf states, and other well-capitalized AI aspirants will bristle in private that their AI ambitions depend on Washington’s goodwill. The fact that T2 nations are being deliberately kept a generation behind the frontier will also rankle. It is a public, institutionalized reminder that no matter how much they invest, they are not allowed to be first-tier players. 

The False Binary: The United States vs. China vs. a Third Way

There is a tendency to view AI geopolitics through a Cold War–style binary—that nations must either align with the United States or drift into China’s orbit. But this overlooks a third possibility: that T2 nations, far from being passive satellites, may seek their own path. T2 nations have no desire to be entirely captive to either the United States or China.

2019 is often cited as China’s “9/11 moment,” when the Huawei and ZTE sanctions forced Beijing to embark on a massive technological self-sufficiency push. The AI Diffusion Rule may trigger a similar reckoning for T2 countries. One may argue that the two are not analogous. After all, Beijing was explicitly cut off, while T2 countries still receive a permissive allocation under the new framework.

But that misses the underlying dynamic. No country makes economic security decisions on the basis of GPUs alone. Increasingly muscular U.S. economic security measures, especially against close allies like Canada, will force capitals to rethink long-term dependencies on U.S. technology.

Short-Term Adjustments, Long-Term Realignment

Of course, this will not happen overnight. In the near term, T2 nations will maximize their initial allocation of 50,000 H100-equivalent GPUs. Given that access is first come, first served, sovereign AI initiatives will ramp up—governments will ensure national priorities dictate GPU access, rather than individual firms. We may also see the rise of regional compute corridors, with nations pooling resources to overcome individual capacity limits.

In the immediate time frame, these countries will likely procure whatever GPUs they can—via U.S.-validated entities (UVEUs) or the narrower NVEU status. As access to U.S. compute becomes increasingly conditional, however, or simply in anticipation of future friction, well-resourced nations will start to invest in their own high-performance computing infrastructure, aiming not to match the raw performance of top-end GPUs—a feat even China struggles with—but to build a functional, mid-tier alternative capable of supporting AI applications. Rather than attempting to replicate leading-edge GPUs, these collaborations could focus on specialized application-specific integrated circuits (ASICs) designed for narrow but critical industrial and commercial AI workloads.

RISC-V-based AI accelerators, designed collaboratively and manufactured in existing T2 facilities, present another pathway. Unlike proprietary architectures like x86 or Arm—both of which are subject to U.S. restrictions—RISC-V’s open-source nature allows nations to design their own AI accelerators. That said, RISC-V is not an immediate off-ramp. While Alibaba has developed XuanTie RISC-V cores, most RISC-V AI efforts remain in their early stages, requiring significant investment and development. Claiming it as a ready solution for compute sovereignty overstates its current capabilities. But dismissing it entirely would ignore incentives to fast-track progress.

If Washington was already concerned about China’s push for design-out, the risk now multiplies exponentially. By overplaying its hand, the United States creates a potential alternate compute stack, operating beyond American control.

The Open-Source Escape Valve

As highlighted earlier, the weights of any model trained using U.S.-controlled compute are subject to U.S. export restrictions if they exceed 10²⁶ FLOP—unless they are open-sourced. At first glance, this creates an incentive for T2 nations to lean into open-source frontier AI development to sidestep U.S. regulatory controls.

However, in practice, this escape valve may not be as open as it seems. T2 countries will likely lack the domestic compute capacity to train models beyond 10²⁶ FLOP, meaning they would need to rent compute. But this raises a liability risk—how can a T2 entity credibly prove, before training begins, that it will follow through on open-sourcing the model weights? Without a mechanism to verify intent pre-training, compute providers may simply deny access upfront rather than gamble on post-training compliance. As such, T2 players may find themselves effectively boxed out of both closed- and open-source frontier AI development.
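To give a sense of scale, a rough sizing of the 10²⁶ FLOP threshold, under assumptions not drawn from the text (roughly 10¹⁵ FLOP per second for an H100-class GPU and 40 percent sustained utilization, both illustrative):

```python
# Illustrative sizing of the 10^26 FLOP training threshold.
# Assumptions (NOT from the text): ~1e15 FLOP/s per H100-class GPU
# and 40% sustained utilization; cluster size is hypothetical.
THRESHOLD_FLOP = 1e26
flops_per_gpu = 1e15      # assumed peak throughput, FLOP/s
utilization = 0.4         # assumed sustained fraction of peak
cluster_size = 10_000     # hypothetical training cluster

seconds = THRESHOLD_FLOP / (cluster_size * flops_per_gpu * utilization)
print(f"Days of training to reach 10^26 FLOP: {seconds / 86_400:.0f}")
# On these assumptions, ~289 days on 10,000 GPUs: crossing the threshold
# requires sustained access to a very large cluster, which most T2
# entities would have to rent.
```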

But even if U.S. restrictions block T2 nations from training frontier open-source models, they may trigger a different kind of shift—an acceleration toward compute-efficient, open-source AI development. DeepSeek has already demonstrated that scarcity breeds optimization. What happens when the broader open-source world, united by shared limitations, begins running in the same direction? There is also a curious inversion at play. While the United States moves to lock down and consolidate global compute, China is positioning itself as the provider of last resort, offering high-quality, cost-effective open-source models that others can build on. Instead of needing access to U.S. AI infrastructure, developers could simply build atop China’s open-source stack.

T2 states do not need a formal conspiracy with Beijing. But their independent hedging efforts, driven by frustration with U.S. licensing constraints, may naturally dovetail with China’s open-source push. Instead of reinforcing a U.S.-led order, Washington’s grip could weaken and lead to a world where open-source autonomy, championed by China, becomes the default escape hatch.

The Fragile Equilibrium and Conditions for Unraveling

By tying advanced compute access to strict controls, Washington risks turning T2 countries into reluctant vassals who start looking for side doors at the earliest opportunity.

Regulatory partitions have a half-life. The policy will only hold as long as:

  • The compliance burden remains lower than the cost of switching;
  • Alternate compute stacks remain too inefficient to offer a mid-tier alternative to U.S. incumbents; and
  • China’s domestic AI stack develops more slowly than the United States anticipates.

The AI Diffusion Rule assumes that lock-in will hold because switching costs are high. But if history tells us anything, it is that whenever an access-restricted market grows large enough, the incentive to develop alternatives eventually outweighs the costs of remaining dependent.

Conclusion and Recommendations

It is more likely than not that the AI Diffusion Rule will be embraced and further expanded by the Trump administration. It aligns with a broader escalation of the China containment playbook, fits neatly into an “America First” approach, and serves as a powerful negotiating cudgel to compel T2 nations to align more closely with Washington’s AI agenda.

But splintering is not inevitable. There are viable policy pathways to preempt T2 strategic drift and allow the United States to maintain control over the global AI stack.

  • Establish Clear Graduation Requirements for T2 Countries: If full T1 status is not feasible, Washington should establish a Tier 2A classification for countries with significant AI infrastructure investments at risk of being stranded, and agree to increase export control enforcement efforts. This tier would receive higher country-level TPP allocations and streamlined NVEU approval processes. There could also be an annual revision of TPP thresholds, particularly as large-scale U.S. initiatives such as Project Stargate take off, to allay T2 country anxieties that they are being deliberately outpaced and falling further behind the compute frontier.
  • Reassert U.S. Leadership in Open-Source AI to Counter China’s Inroads: The AI Diffusion Rule unintentionally incentivizes T2 nations to embrace open-source AI. Rather than cede this space, Washington should preemptively shape the open-source landscape to prevent China from becoming its de facto steward. This could take place by incentivizing international AI research collaborations through U.S. university public compute. This would offer structured, monitored access to AI infrastructure for vetted T2 researchers—allowing the United States to maintain influence over who contributes to the global open-source AI stack. In time, the United States could also develop certification standards for “trusted” open-source models that meet security requirements.
  • Modernize BIS and Export Controls Enforcement: The streamlined licensing burden on BIS creates an opportunity to reallocate resources toward strengthened enforcement and tracking mechanisms that better prevent circumvention. This would dovetail with the growing digitization drive under the Trump administration. BIS could deploy automated TPP tracking systems that provide real-time visibility into GPU deployments, create standardized APIs for reporting and monitoring, and build automated early warning systems for potential diversion.
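To make the tracking idea concrete, a minimal sketch of the kind of automated compliance check such a system could run. The country names, TPP units, and reporting format below are entirely hypothetical; only the 7 percent per-country cap on a license holder's total compute comes from the text:

```python
# Hypothetical sketch of an automated TPP cap check. The 7 percent
# per-country limit on a licensee's total compute is from the text;
# the reporting format and all figures are illustrative.
COUNTRY_CAP = 0.07  # max share of a licensee's global TPP in one T2 country

def flag_cap_breaches(total_tpp: float,
                      t2_deployments: dict[str, float]) -> list[str]:
    """Return the T2 countries where a licensee's reported TPP deployment
    exceeds 7 percent of its total global compute."""
    return [country for country, tpp in t2_deployments.items()
            if tpp > COUNTRY_CAP * total_tpp]

# Illustrative report: TPP units deployed per T2 country (made-up numbers)
report = {"Malaysia": 9.0, "India": 5.0, "Brazil": 4.0}
print(flag_cap_breaches(100.0, report))  # ['Malaysia'] -- 9% exceeds the cap
```

An early warning variant could alert as deployments approach, rather than breach, the cap, giving BIS and the licensee time to rebalance.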

In some ways, the United States has crossed the Rubicon, and there is no retreat. China will continue its drive for AI self-sufficiency regardless of U.S. actions. But export controls are like self-replicating automata. They tend to expand with each iteration, creating new loopholes, countermeasures, and pressures for escalation. A more adaptive approach, however—one that balances U.S. leadership with credible pathways for allies—could allow Washington to have its cake and eat it too. If there is a better hand to play, now is the time to find it. And if there is anyone that thrives on breaking and remaking the playbook, it is President Trump.

Barath Harithas is a senior fellow in the Economics Program and Scholl Chair in International Business at the Center for Strategic and International Studies in Washington, D.C. The author is grateful to Catharine Mouradian, program manager and research associate in the Economics Program and Scholl Chair in International Business, for her valuable assistance on this paper.

This report is made possible by general support to CSIS. No direct sponsorship contributed to this report.
