For six years, Christian Brose served as Senator John McCain's chief national security advisor, later becoming the youngest-ever staff director of the Senate Armed Services Committee. In that role, he gained access to classified briefings about the real balance of power between the US and China. What he saw alarmed him so profoundly that in 2020 he wrote "The Kill Chain"—a disturbing manifesto on how America is losing its military edge.
"The kill chain" is a basic military term describing the three steps of any combat operation: understand what's happening; decide what to do; deliver the strike. Brose's central thesis: America's military machine executes this chain too slowly, and the Pentagon's bureaucracy is incapable of implementing the technologies that could accelerate it.
After leaving the Senate, Brose became head of strategy at Anduril Industries—a defense startup currently building a 5-million-square-foot factory in Ohio for mass production of autonomous combat systems. In conversation with School of War podcast host Aaron McLean, he explains why Ukraine's lessons can't simply be copied for a war with China, and what needs to change before it's too late.
Aaron McLean: Today we're joined by Christian Brose—president and chief strategy officer at Anduril Industries. He has a long career in government service behind him: he was staff director of the Senate Armed Services Committee under Senator John McCain, worked at the State Department under Secretaries Rice and Powell. Christian, thank you for coming.
Christian Brose: Thanks for having me. I've been following your show for a while, happy to be here.
Aaron McLean: Plenty to discuss. I want to talk about the state of warfare in 2025 and looking ahead to 2026, discuss Anduril. But let's start with a biographical angle. You've been doing this for a quarter century—government service, defense industry, most of the time focused on defense issues. If you could go back to your 2009-2010 self and talk to that person—what would surprise him most about what you know now? What diverged from your assumptions back then about America's place in the world in terms of defense technology and strategy?
The Erosion of American Military Supremacy
Christian Brose: Great question. There's a personal answer and a professional one. On the personal side: if I'd told my 2009-2010 self what I'd be doing in 2025, I'd never have believed it. Careers in Washington take unexpected turns.
In a broader sense, I'd be surprised by how significantly our military advantage has weakened. Back then it was taken for granted that America had something like military dominance. Yes, many were watching China, but the focus wasn't there, not in the national security strategy, not in the defense strategy. The assumption was that what had been true for a very long time still held: we can go where we want, do what we want, and we have military superiority over any adversary. It's remarkable how quickly that superiority has eroded, to the point where we now see parity in several areas. And this will become a defining feature of our world, especially in US-China relations.
Aaron McLean: Let's dig deeper. In a Wall Street Journal interview this year, you told journalist Kate O'Dell: "At every level, our conception of military power and the industrial base we've optimized to create it are systemically wrong." Let's unpack that. Specifically, how is our understanding of military power mistaken? Where are we losing technological advantage—beyond issues of mass production and depth of munitions stockpiles?
Christian Brose: These things are connected. After the Cold War, America was head and shoulders above its competitors. We believed that, thanks to this superiority, we could rely on a model of military power built around a small number of highly sophisticated systems. Behind this was an assumption about the nature of war: if we found ourselves in a conflict, it wouldn't last long, days or weeks. We wouldn't expend much weaponry, wouldn't lose many combat systems or people. We wouldn't have to deal with a protracted conflict.
Now, looking at the war in Ukraine and the Middle East, we understand: these assumptions no longer hold. How do you prepare for a war that drags on for months and years? How do you counter competitors with comparable military capabilities? How do you sustain combat operations, industrial production, and force reconstitution in a protracted conflict? We're not ready for any of this. Our tools aren't designed for it.
The last 30-40 years have been a historical anomaly. If you look at how America thought about conflicts, about war, about the industrial base throughout its history and how it achieved success—it looked more like what I'm describing, not this anomalous period of colossal superiority.
Aaron McLean: Even the New York Times is now sounding the alarm on these issues.
Christian Brose: Imagine that.
Aaron McLean: I was surprised myself. I was hoping for at least a couple footnotes to your book, but apparently that's too much to ask.
Christian Brose: High praise indeed.
Aaron McLean: I couldn't find anything to disagree with. Excellent material. But was there a moment in your career when you first felt alarmed? You wrote "The Kill Chain," which developed these themes earlier than most. Was there a moment that shook you? Or were you, from the beginning, part of a small group of people who understood all this but whom no one was listening to?
Awakening in the Senate
Christian Brose: For me it was a gradual realization during my time in government, especially in the last four years of Senator McCain's life, when I was his staff director on the Armed Services Committee—from 2014 to 2018.
There's an old saying attributed to Kissinger: in government you spend intellectual capital, you don't accumulate it. My experience was the opposite. Every day people came to me with information, meeting after meeting, briefing after briefing. A real parade of information. The advantage of my position was that I saw the whole picture. I had a team dealing with individual areas: shipbuilding, aviation, ground forces. Everything was presented to me together. The Department of Defense works the same way: everyone comes in with their own piece. I saw it all side by side and noticed patterns across the whole joint force, across the entire defense program.
Gradually came the realization: we're not where we should be. Our advantage is eroding. Our programs are archaic. We're not moving fast enough to address the threat, and we're not leveraging the opportunities that technologies provide.
But the specific moment that tied it all together was a classified briefing that attempted to look at everything in aggregate and assess how "red" and "blue" compare. Let's just say: the conclusions were not encouraging. They resonated with my experience—I'd been seeing it in pieces, briefing after briefing, and finally saw it all together. This prompted Senator McCain and me to try to convey this picture to other committee members and the Senate. Most simply weren't paying attention, couldn't dive that deep. But when you do dive in—the picture across the totality of forces and programs is quite alarming.
Aaron McLean: Can you diagnose the problem as a whole—how did we get here? In "The Kill Chain" you start the analysis earlier than many others. Usually people count from the end of the Cold War and the "Last Supper"—the consolidation of the defense industry. You trace the problem into the Cold War itself, to the growth of bureaucracy. So, the highest echelons of power have been aware of the problem for over a decade. It's now 2025, and you can't say the trends have decisively reversed in our favor. What's the nature of the problem itself?
The Roots of Bureaucratic Paralysis
Christian Brose: Great question. Coming out of World War II, after the massive mobilization of the industrial base and of commercial industry that won the war, we started the Cold War well. But then came the inevitable reaction to wasted money and lack of accountability: an effort to bureaucratize the "long twilight struggle" of the Cold War.
This really took off under McNamara. How do you create centralized planning? If you look at the origins, it's striking how much this goes back to socialist, nearly communist theories—the primacy of the state, the need to regulate industry, eliminate the unpredictability of capitalism and replace it with centralized state planning. This is the PPBE system—Planning, Programming, Budgeting, Execution.
When we succeeded in the 1970s and '80s, it was because we bypassed this system, working effectively through classified programs to create what became AirLand Battle doctrine and deep strike: everything that was demonstrated in the first Gulf War.
But after the Cold War this system didn't change; there seemed to be no reason to change it. The system exists to this day. And its basic assumptions, formulated explicitly in the 1950s, '60s, and '70s, held that innovation was a finished stage: we had already created the military capabilities we needed, and now we just had to manage the industrial base efficiently. The very possibility of innovation, breakthroughs, and technological change was written off as unnecessary.
The problem today is precisely what this bureaucratic system wrote off as solved and unimportant: breakthrough technological change and shifts in the threat environment. We're trying to cope with forces the government doesn't control using the methods of centralized planning.
The government no longer spends most of the capital for research and development; it's not even close. It's all in the private sector. The technologies driving us forward come from the private sector and from commercial technology, not from government laboratories. Yet the government still thinks in terms of control.
The challenge becomes increasingly acute: how do you create incentives for change and innovation in a monopsonistic environment—a market where the state is the only buyer? This is the opposite of what the system was designed for. The system is designed to formulate requirements, linearly transfer them to programs, budget, procure, and execute over 10-15 years. This is absolutely irrelevant to the world we live in—both technologically and in terms of threats.
How do you create alternative processes with the opposite logic, where requirements are unknown to us and will change rapidly, and we have to build flexibility into our programs? How do you make budgeting and procurement flexible enough to absorb lessons quickly and to plug new capabilities in and swap them out? That's everything the Ukrainians and Israelis are doing. In the best examples of American history we did this too, when we were serious, when we faced challenges like this.
This is the difficulty: how do you transition to wartime thinking and wartime readiness when the country operates with peacetime bureaucracy and peacetime mentality?
Aaron McLean: We'll come back to the Israelis and Ukrainians. But what in the US gives you hope? What glimpses do you see in the current administration's actions? And the flip side—what in bureaucratic practice or Congress needs to stop immediately?
Consensus Achieved—What Next?
Christian Brose: I'll focus on the positive. We're having a completely different conversation about these problems than we were five or six years ago when I was writing the book, let alone ten years ago. That's excellent news. Look at the degree of consensus—the New York Times ran a week-long investigation into everything we're talking about and generally took the right position. We've achieved something like consensus in an era when Republicans and Democrats can't agree that grass is green and the sky is blue.
The current administration's actions are also encouraging. There's an emphasis on procurement reform, on developing new capabilities and breakthrough change, and it comes from the top down. There's a sense of urgency, an understanding that you can't keep operating business as usual or we'll lose; you have to do new things and take risks.
The question is: what will we do with all this enthusiasm and consensus? I'd focus less on obstacles because essentially there aren't any. This is perhaps a controversial thesis. We can always improve processes—requirements, procurement, budgeting. But I'd argue: we already have all the authority to do practically everything we want. Generate new ideas, create new procurement strategies. We're spending nearly a trillion dollars on defense—there's enough money.
The real question is: what do we want to build? What direction does the government give? What new programs do we create to field the necessary systems in large quantities and put the industrial base on a war footing?
In a sense, this is a frightening thought for many. It's easier to say: "There's some process standing in my way, and until I fix it I can't do anything." It's scarier to realize that nothing is stopping you, that you have everything you need to make breakthrough changes on the required timeline. So what are you going to do with it?
My hope for the next three years of the administration: creating new ideas, new programs, obtaining the capabilities America needs. Not copying Ukrainian experience into the American military, but drawing lessons from Ukraine and designing programs appropriate for our strategy, our way of fighting, our threats. And mass-producing these systems as quickly as possible. This is quite achievable in a compressed timeframe with available money and authority. What's needed is ideas and a sense of urgency.
Aaron McLean: Let's talk about the battlefield. Ukraine is very different from the Western Pacific. But this, along with events around Israel, is a war we should be studying. What should we learn? And even more important—what shouldn't we learn as it applies to a Western Pacific scenario?
Ukraine's Lessons: What Applies and What Doesn't
Christian Brose: It would be interesting to hear your opinion. There's a tendency to focus on specific systems that are effective in Ukraine—FPV drones with fiber-optic control and other tactical innovations. Due to the tactical nature and land-based character of this conflict, these are exactly the systems that work.
I'm not sure this translates to a confrontation in the Western Pacific, where the theater is predominantly maritime, distances are huge, and China is a far more formidable adversary. I don't think it's enough to say: "We need to ramp up production of FPV drones and Shahed analogs—and we're in great shape."
The basic lessons are different: inexpensive expendable systems, mass production, an industrial base that flexibly adapts, embedding battlefield lessons into rapid iteration of development and deployment. That's the lesson we need to learn. Not the specific systems working for Ukraine, but the pace of learning, the scale of production, the ability to adapt to a battlefield that changes daily.
Here we still have a long way to go. We can produce FPV drones, we can produce Shaheds. But do we have an organization in government capable of learning and moving as fast as the Ukrainians? Is there an industrial base as flexible in deploying and iterating new capabilities?
When this war started, no one could predict which systems would prove effective. If the war continues another year or two, the systems we'll be talking about then likely won't be the ones we're talking about now. What's crucial for the US is institutional learning, industrial adaptation, and scale.
Aaron McLean: Interesting. I constantly wake up to news of some military operation in Israel or Ukraine that's astonishing. Sometimes they're bad surprises, but often they're welcome ones. The supply chain attack with pagers against Hezbollah was remarkable, and more a testament to planning and human cunning, to old-fashioned strategy, than to anything particularly high-tech.
But there's also something else: the ability to strike with such precision and scale, tracking multiple targets. On the first night of the 12-day war, when they eliminated nuclear scientists and commanders in Iran, you see munitions hitting not here, on the wall of the building, but there, in the bed in the northwest corner of the room. You imagine what's behind that technically, and it's stunning.
You've thought a lot about the technical "innards" of all this—what the internet means in a military context, AI's role in processing big data. What do people need to understand about the nature of war today in terms of AI and technology's role? And what do we need to accelerate in the US?
AI and Air Defense: Automate or Lose
Christian Brose: It all comes down to scale and speed. Even the Israeli strike on Iran's nuclear complex looked more like a well-orchestrated fusion of intelligence and military operations applied very precisely against point targets. That looks more like the familiar past than the future.
But the Israelis' defense against incoming weapons, what the Ukrainians have to do every night: this is where you start to see where AI, machine learning, and autonomy become applicable. When you need to scale not against a handful of facilities or individuals but against large formations, against many targets a day, at real volume, simultaneously, on both offense and defense.
In the US I'd focus on the air defense application. We've long been watching China's buildup of ballistic, cruise, and now hypersonic missiles: a huge arsenal for strikes on Guam, on our bases in the first island chain, or even on US territory. The volume that will have to be handled won't be a single day, like when we helped Israel defend against an Iranian strike of 200-300 missiles; it will be again and again. And not point defense but area defense, as the Ukrainians have to do.
This problem cannot be solved without these technologies. You'll have to automate your way out of it. We've partially learned this lesson with systems like Aegis: we automate the defensive kill chain because it protects human lives. But when the volume of threats, whether cheap attack weapons or the depth of Chinese munitions stockpiles, requires repelling hundreds and thousands of attacks again and again, we simply won't have enough people and capacity to do it without technological support.
I think this will be integrated first in defensive applications, because this is technology in service of protecting human life. With offensive operations—automating the kill chain for hunting and strikes—democratic armies will justifiably be more cautious.
Our legacy air and missile defense architectures aren't suited for this. They were created in the late 1990s and early 2000s to defend against launches of very small numbers of missiles from rogue states like Iran or North Korea. That threat picture has long since been overtaken by the modernization and buildup of Russian and Chinese arsenals.
Aaron McLean: The distinction between defense and offense is interesting. We could dive into the ethical questions of AI and autonomy. But I have a different concern. I constantly see senior officers saying: "I'm very familiar with these chatbots, I use them a lot in my daily work." The Pentagon, like every major institution in America, is trying to integrate large language models into its workflows, including planning.
What worries me is the same as with any institution integrating these tools: how reliable are they really? I'm concerned about the long-term impact on us as humans, on our ability to learn, to own our own work.
The air defense application you described is indisputable. Everyone agrees you can't do air defense in 2025 without these tools. But I haven't seen data on whether frequent LLM use makes planners and leaders better at their jobs. Faster in the short term, perhaps, but better in the medium and long term? Maybe I'm just a Luddite, but I'd like your opinion.
Technology—A Tool, Not a Panacea
Christian Brose: This is obvious, but technology is always a tool. It's never inherently good or bad. It will be used for good and for harm. In some cases it will make us better, in others—more complacent, lazy, or stupid.
We must deploy these technologies. Too much conversation in Washington still has a theoretical cast: "We need to solve all problems in principle before we start applying technologies." The only way to figure it out is to implement them and see how people use them, build trust. We'll see, as with many technologies in the past, that trust breaks down where they're less reliable. We'll see where technologies do things better than humans and can be relied upon. And we'll see many cases where the decisive role remains with humans.
As for using LLMs for queries, for working with information and for planning: I have two teenagers at home. It's striking how my high schooler uses ChatGPT as a private tutor on top of school. And I think that's good. Tutoring is about the only educational intervention that consistently improves outcomes, and now he has that ability to query and research information on his own.
Provided the underlying information isn't biased or ideologically distorted, and is complete. A lot depends on the data being right. But the ability of planners and military operators to research information more thoughtfully makes them better, I'd argue, than the old approach of "here's what you need to do, don't ask questions, execute."
The prospect of equipping people with tools that make them more thoughtful and effective, and that let them push better ideas up the chain because they have greater access to information and a better ability to process it: that's encouraging.
Another application—agentic AI: automated tools that replace humans in performing routine, repetitive tasks that computers handle better. Then people can focus on what only people can do and what we want only people to do: leadership, strategy, command, thinking.
This division of labor between human and machine can be both operationally useful and morally beneficial. People in the military who came to solve problems, command, lead—can spend more time doing what they want to do and what we need from them, and less on routine.
Aaron McLean: I hope you're right and your optimism is justified...
Christian Brose: Let me clarify: I'm probably wrong. I'll definitely be wrong about elements of this. We will be wrong. The lessons and results will differ greatly from expectations. The question—and I know this isn't your point—is whether we start down this path and deploy these technologies or not? I think we should. And in the process we'll learn a lot, go in directions we didn't foresee. And this must always be done thoughtfully, critically, self-aware—not assuming technology will be a panacea.
Aaron McLean: Let me try to formulate my somewhat contrarian concern more clearly. A general or senior planner today. Or your teenagers at home who studied for 15 years without AI and now use it for supplementation and learning. Or a senior officer who had a whole career and now has a powerful tool to check ideas and assumptions. Hard to say that's terrible.
But what about a world, not so far away, where today's teenagers have grown into tomorrow's planners and commanders and have never written anything themselves? The only world they know is one where AI is not just a helpful mentor but an omnipotent agent for any task. The problem, and you and I are old enough to know this, is that if you can't write something out clearly, you haven't thought it through. What happens when you spend your whole life with machine assistance for even the most basic cognitive tasks? I'm worried about the quality of talent in 20-30 years.
The Humanities Will Win
Christian Brose: And you're right to worry. I want to be clear: I wasn't saying this will be positive in every case. In some cases it will be pernicious, paralyzing to thinking, to the ability to think creatively and critically.
But this returns to the question of education—in society as a whole or in professional military education. What we will value in the future is what we've always valued historically. I'm proud to be a product of a liberal arts education, as are you. Ultimately the liberal arts win—these are the skills that will be valuable. The ability to think creatively when circumstances change. Imagine new possibilities. Build connections across disciplines.
I think AI will be better than humans at many specific tasks; we're already seeing this. Five to ten years ago everyone said: get a programming degree, learn to code, and you'll own the future. It turned out not to be true. And that will continue: there will be more and more areas of specific work that these technologies master as well as or better than humans.
This makes it paramount to have an education that lets people think creatively and critically across disciplines, build connections, and put themselves in a position of owning these tools rather than letting the tools own them and turn them into a reflection of whatever the AI says.
If the latter happens and your fears are realized, then we will have failed in how we educate people. But if we evolve and think about education more broadly (the risk remains, I acknowledge), it's more likely we can use these technologies so that people do their jobs better. And so that education does what it should fundamentally do: prepare people, creatively and critically, for a rapidly changing world and for circumstances none of us can foresee, rather than teach a specific task on the assumption that it will stay relevant forever.
Aaron McLean: I hope somewhere there's a planning cell trying to understand how to disrupt the PLA's work through their large language models—because I feel like on the other side this is already happening.
Christian Brose: Yes.
Aaron McLean: Let's talk about Anduril. There have been many headlines this year. Tell us about Arsenal-1—the facility you're building that embodies what you've been talking about: mass production of potentially expendable items. What is Arsenal-1? Paint a picture for our listeners—what would they see in this part of Ohio?
Arsenal-1: The Factory of the Future
Christian Brose: Arsenal-1 will ultimately have 4 to 5 million square feet of manufacturing capacity outside Columbus, Ohio. Not literally under one roof, but under a series of roofs of very large facilities located on one campus.
Why did we make that choice? In studying how to scale up mass production, which we've been doing for the last few years, we realized that the politically easy path is what has often been done before: split up programs and supply chains and spread them across as many states and districts as possible, because that creates political buy-in. There's a logic to that.
But Arsenal-1's logic is different: there's enormous benefit in co-locating programs of a similar type, producing weapons and autonomous systems of different classes in one place. This goes back to our earlier conversation. If the future brings radical changes in products, if we need to launch new products as technologies or the threat environment change, then a production platform under one roof lets you quickly shift people, materials, machines, and capital to unforeseen tasks.
Plus the scale of production we've always aimed for: a tenfold increase in volume compared to traditional defense manufacturing. Arsenal should produce tens of thousands of military systems per year—and this will grow quickly.
This is our own money invested, plus significant support from the state of Ohio. No federal dollars are coming in. The Department of Defense isn't funding Arsenal-1's creation the way it funded many legacy programs for traditional defense companies. We're building it ourselves: ultimately about a billion dollars of investment, supplemented by Ohio's support. Roughly 4,000 jobs will be created.
We've shown that you can very quickly create a new industrial base for producing what we're going to build: inexpensive weapons, autonomous fighters, and other advanced aircraft. The workforce required isn't the highly specialized, scarce kind needed for the most complex programs like Virginia-class submarines or the B-21 bomber.
This is more of a "back to the future" scenario: a workforce that can be recruited broadly from commercial automotive or aerospace. This is how we created the "Arsenal of Democracy" in World War II, with women who came to the factories in place of the men sent to the front, and with people who needed minimal retraining. We built military systems that the industrial base could actually produce, rather than the other way around: defining requirements for what we'd like to have, only to find the industrial base can't produce it in volume.
Arsenal-1 will be a software-defined production platform capable of building everything from small autonomous aircraft to robotic fighters to various weapons: cruise missiles, rocket-powered missiles. Everything except the energetics; the rocket motors and warheads we make elsewhere, but system production and final integration all happen at Arsenal.
Aaron McLean: A question based on some not-very-well-verified stereotypes, so correct me if I'm wrong. In terms of business strategy and culture: what are the challenges and advantages for a Silicon Valley company with roots in technology that's becoming a major manufacturer? What has to be overcome? And what advantages does Anduril bring that aren't usually there?
Silicon Valley Meets Defense
Christian Brose: A couple of answers. If you look at Silicon Valley companies moving into mass production, many are already doing it at hyper-scale in consumer electronics or commercial automotive. We're trying to carry the lessons they've learned and created over into defense: automate what should be automated, put people where people are needed, and increase production volumes the way Silicon Valley software companies have proven effective at doing while disrupting other sectors of the economy.
The problem with defense is that in many areas it's fundamentally different from the commercial economy. There's a huge number of lessons from commercial technology and manufacturing, mostly related to "Design for Manufacturing": designing military systems that are simple to build, cheap to build, and easy for the broadest possible workforce to assemble. Because what matters isn't just lethality and combat capability but also manufacturability and scalability. Earlier in our history we relied on this much more, but over the last generation we've forgotten that it matters too.
Where commercial technology helps design systems more simply and scale production using lessons from the commercial economy, that's a benefit. The problem lies in the areas that are critical to defense but have no analogue in the commercial world, where certain things can't be designed out: energetic materials, precursors, germanium.
You can limit dependence—and that's our strategy: be less dependent on critical minerals in areas where our supply chains aren't as resilient and where China has an advantage. But you can't eliminate them entirely. You can't eliminate energetic materials from defense systems—whether rocket motors or warheads.
Where Anduril differs: we're making significant investments in building capacity in these areas. A few years ago we acquired a solid rocket motor production company. We're pouring investment into this part of the industrial base to produce more motors—not just for ourselves but for other defense companies. We'll do the same in other strategic areas that are supply chain limiting factors: seekers for weapons and other critical technologies that haven't allowed us to scale weapons production as a country.
We won't wait for the government to write a check. We'll move with our own resources to create these capacities for ourselves and others. The combination of these two approaches distinguishes us and positions us well to execute significant contracts we've won and are winning, as well as more ambitious things in the future.
Aaron McLean: We have a couple minutes left, but I don't want to let you go without asking about Collaborative Combat Aircraft (CCA). Another area where Anduril is very active, with a major contract. Explain in language understandable to an infantryman with a liberal arts education: how will this work? As I understand it, there's a manned aircraft escorted by unmanned ones performing other tasks. What's the division of labor? Is the manned aircraft now just a control node dispatching these craft to perform tasks? That's the first question.
Second: do you see this as an intermediate stage of air combat? And how long will it last before manned systems become unnecessary?
Combat Aircraft with AI Wingmen
Christian Brose: Both great questions. The honest answer to the first: we don't know yet. "We" meaning the US as a whole. This will be clarified as we develop, produce, deploy, and integrate these systems into joint forces.
The CCA that Anduril is building for the US Air Force is a high-speed tactical aircraft, more analogous to an F-15 or F-16. It will fly like a fighter, have a similar profile, carry weapons and sensors. But there will be many different CCA variants with different characteristics and missions.
From an infantryman's perspective: ideally these will be teammates. Not equals, but subordinates. Augmentation of the human pilot or operator. They bring additional sensors into the battlespace that wouldn't otherwise get there—extending the human's horizon and capabilities. They carry additional weapons, increasing range and firepower—both to protect the pilot's life and enhance mission effectiveness.
In an ideal world, this is how you achieve the scale we've been talking about, while keeping people where they're needed: to command, to manage, to bring into the fight the capabilities that only they have aboard the systems they control.
This combination of expensive and cheap is an important principle for reimagining the joint force. Not everything will be exquisite high-tech systems, and not everything will be cheap expendable robots. It's a mix: a small number of exquisite manned platforms and combat systems plus a large volume of cheaper weapons and autonomous platforms that add combat capability under human command.
On the second question: yes. If over time we aren't taking more people out of harm's way and replacing them with more capable technological systems, then we're failing technologically. The challenge is that as we push robotic systems and intelligent weapons deeper into the battlespace, how do we, as a democratic society that values law, rights, principles, and treaties, ensure the appropriate degree of human control over how these systems are employed?
I believe this is possible. This doesn't have to be an intractable problem limiting technological progress. But, as at the beginning of our conversation, this is something we'll only figure out through doing. We need to start, go through these processes, learn, adapt, evolve.
If we do this correctly and thoughtfully, consistent with our values—the country will be better and far more combat-capable. And this is the only way to create the mass and lethality we need to restore deterrence and ensure a peaceful future for decades ahead.
Aaron McLean: Christian Brose, president and chief strategy officer at Anduril. I learned a great deal during this hour and am grateful you found time for School of War.
Christian Brose: Thank you very much. Really appreciate it.