Stanford’s 2025 Congressional Boot Camp on Artificial Intelligence brought together senior, bipartisan congressional staffers and some of the most influential voices in AI from industry, academia, and policy. Across two major sessions — “The Role of Business: Policy Implications of Industry Leadership in AI” and “Charting the Future: Open Innovation in an Era of Global AI Competition” — the conversation wrestled with a central challenge: how to sustain rapid AI innovation while protecting societal interests, national security, economic competitiveness, and democratic values.
The national security dimension was never far from the surface. In a global landscape where AI capabilities can be dual-use — advancing both civilian productivity and military strength — the question was not only how the U.S. can lead in innovation, but who will lead, under what values, and with what safeguards. This tension — between openness, competitiveness, and security — ran like a current through the discussions.
Session 1: The Role of Business — Policy Implications of Industry Leadership in AI
The first session opened with a clear message from Navrina Singh, CEO of Credo AI: governance should be seen not as a brake on innovation, but as an accelerator when designed well. Singh argued for a context- and use case–specific AI standard — akin to SOC 2 in cybersecurity — that would give startups the ability to integrate into large enterprises more quickly by building trust with buyers and regulators. This framing turned the compliance debate on its head, positioning governance as a competitive advantage rather than a hurdle.
Matt Perault of Andreessen Horowitz took a different angle, warning that most development-focused regulations unintentionally tilt the playing field toward large incumbents. Startups, he noted, often lack dedicated legal and policy staff, making them ill-equipped to absorb high compliance costs. Instead of regulating the process of building AI, Perault called for policies that target harmful uses directly, arguing that such an approach would be more enforceable and less damaging to innovation. The national security parallel was implicit: if harmful uses such as cyberattacks, misinformation campaigns, or adversarial military applications are the real threats, then policy should focus directly on preventing and punishing them — not on stifling the development capacity that enables U.S. competitiveness.
From Google’s perspective, represented by Alice Friend, the focus was on infrastructure. Friend emphasized the scale of economic opportunity — citing estimates of an additional $4 trillion in U.S. GDP by 2030 from AI adoption — but warned that the country’s energy and transmission capacity must expand to match the demand from data centers. Google alone, she said, expects to spend $85 billion in capital expenditures in 2025, much of it directed at ensuring the energy supply keeps pace, with a push toward diversification that includes geothermal, nuclear, and other clean sources. The subtext: without reliable, resilient, and secure infrastructure, the U.S. risks both economic and strategic vulnerabilities, especially in a future where AI-enabled systems underpin critical national capabilities.
Sarah Guo, founder of Conviction, urged policymakers and investors to look beyond incremental plays that mimic frontier models. The true prize, she argued, lies in sectors where productivity has stagnated for decades — notably healthcare and education — and where AI can unlock transformative gains. In her view, the ability of the U.S. to maintain leadership in AI is not only about beating competitors in raw model performance but also about demonstrating transformative societal benefits that strengthen domestic resilience — an underappreciated component of long-term national security.
The discussion underscored several key tensions: the risk of regulatory frameworks that entrench incumbents, the infrastructure bottlenecks created by AI’s compute demands, and the choice between chasing headline-grabbing general models versus targeting under-innovated, high-value sectors. Each of these choices, the panelists implied, has security implications, whether in terms of economic stability, technological sovereignty, or strategic deterrence.
Session 2: Charting the Future — Open Innovation in an Era of Global AI Competition
The second session turned to the geopolitical and structural dimensions of AI leadership, where national security considerations came into sharp focus. Fei-Fei Li, co-director of Stanford HAI, made a forceful case for open systems, arguing they are critical not just for research and talent development but for maintaining U.S. competitiveness. She pointed out that universities cannot independently build and train the largest AI models, but open access to models enables scientific discovery, drug development, and advanced training for the next generation of AI researchers. Li also reminded the audience that China’s decades-long investment in STEM education has been a key factor in its rapid progress in AI, underscoring that talent development is as much a security imperative as a scientific one.
Sarah Friar, CFO of OpenAI, provided an inside look at the company’s approach to openness. OpenAI’s new “one-behind-frontier” open-weight model is intended to strike a balance between broad access and risk management. Friar spoke candidly about the staggering infrastructure challenges ahead, projecting a move toward 10 gigawatts of compute — potentially requiring half a trillion dollars of investment — and noting that the U.S. private sector has largely shouldered these costs without direct public funding. She framed the release of open-weight models not only as a mission-aligned decision but as part of a broader democratic technology strategy — one that ensures AI advances are shaped in environments where transparency, accountability, and rights are respected, a pointed counter to authoritarian approaches.
Condoleezza Rice, drawing on her national security background, was blunt about the stakes. In her view, the U.S. faces a two-player race at the frontier — with China as the primary challenger — and must “run hard and fast” to maintain leadership. She cited China’s civil-military fusion strategy, which channels technological advances directly into military capabilities, as a reason why slowing down for overly cautious regulation could be strategically costly. Yet she also acknowledged that openness, when paired with speed and investment, could be a decisive U.S. advantage, both by leveraging distributed innovation and by building global trust in democratic-led AI development.
Throughout the discussion, the speakers grappled with how openness interacts with national security, the limits of export controls, and the role of democratic alliances in shaping global AI norms. There was a clear throughline: leadership in AI is no longer just an economic competition — it is a matter of strategic positioning in the emerging geopolitical order.
Cross-Cutting Themes
Across both sessions, several themes emerged that cut across industry, policy, academia, and security.
Regulation should be targeted and contextual
Speakers warned against one-size-fits-all rules. In national security terms, overbroad regulation risks handicapping domestic innovators while leaving adversaries unimpeded.
Infrastructure is both an enabler and a vulnerability
The scale of compute and energy needed for frontier AI is unprecedented, and gaps in capacity could become strategic liabilities.
Openness as a competitive lever
From model weights to research collaborations, openness was framed not just as an academic ideal but as a way to rally democratic alliances and accelerate innovation in friendly nations, countering authoritarian gains.
STEM education as a security investment
The U.S. has underinvested in its talent pipeline for decades. Leveraging AI itself as a teaching tool — and ensuring educators are trained to use it — could be a force multiplier for both innovation and national security readiness.
Conclusion
The boot camp revealed an unusual degree of alignment among industry, academia, and policy on several fronts, but the national security undertone was unmistakable. Speakers from across the spectrum recognized that U.S. AI leadership depends on coupling innovation speed with targeted governance, investing in both compute infrastructure and human capital, and ensuring that democratic values — and not authoritarian norms — guide the development of the most powerful general-purpose technology of our time.
For congressional staff, the takeaway was clear: regulate with precision, invest with urgency, and build bridges between the worlds of policy, technology, and security. The coming years will demand decisions that shape not only the U.S. economy but also its strategic position in the world. Whether through open innovation, smarter infrastructure policy, or AI-powered education, the race is on, and the stakes are nothing less than prosperity and security in the AI age.
