When I wrote about Anduril in 2018, the company explicitly said it would not build lethal weapons. Now you build fighter planes, underwater drones and other deadly weapons of war. Why did you make that turn?
We responded to what we saw, not just within our military but around the world. We want to be aligned with delivering the best capabilities in the most ethical way. The alternative is that someone else will build them anyway, and we think we can do it best.
Were there any in-depth discussions before you crossed that line?
There is constant internal discussion about what we should build and whether it aligns ethically with our mission. I don't think there is much point in trying to draw our own line when the government actually sets that line. It has given clear guidelines on what the military will do. We follow the lead of our democratically elected government, which tells us its problems and how we can help.
What is the right role for autonomous AI in warfare?
Fortunately, the US Department of Defense has done more work on this than any other organization in the world, with the exception of the large generative-AI foundation-model companies. There are clear rules of engagement that keep a person in the loop. You want to take people out of the dull, dirty, and dangerous work and make decision-making more efficient, while always holding a person accountable in the end. That is the aim of every policy that has been introduced, regardless of how autonomy develops over the next five or ten years.
In a conflict, when targets present themselves in the blink of an eye, the temptation not to wait for a human to intervene can be great, especially with weapons like your autonomous fighter planes.
The autonomy program we're working on for the Fury aircraft [a fighter used by the US Navy and Marine Corps] is called CCA, Collaborative Combat Aircraft. There is a person in a plane who commands and controls the robot fighter planes and decides what they do.
What about the drones you build that hover in the air until they see a target and then pounce?
There is a class of drones called loitering munitions. These are aircraft that search for targets and then have the ability to go after those targets kinetically, a kind of kamikaze. Again, there is a person in the loop who is responsible.
War is messy. Is there not a genuine concern that these principles will be cast aside once hostilities begin?
People wage wars, and people have flaws. We make mistakes. Even when we were standing in line and shooting each other with muskets, there was a process for adjudicating violations of the laws of engagement. I think that will continue. Do I think there will never be a case where an autonomous system is asked to do something that feels like a gross violation of ethical principles? Of course not, because people are still in charge. Do I believe it is more ethical to prosecute a dangerous, messy conflict with robots that are more precise, more discriminating, and less likely to escalate? Yes. Deciding not to do this means you continue to put people in harm's way.
I'm sure you're familiar with Eisenhower's farewell address about the dangers of a self-serving military-industrial complex. Does that warning affect the way you work?
That's one of the greatest speeches of all time; I read it at least once a year. Eisenhower described a military-industrial complex in which the government is not so different from contractors such as Lockheed Martin, Boeing, Northrop Grumman, and General Dynamics. There is a revolving door at the upper levels of these companies, and that interconnectedness turns them into centers of power. Anduril has promoted a more commercial approach that doesn't rely on that closely linked incentive structure. We say, "Let's build things at the lowest cost, using off-the-shelf technologies, and do it in a way where we take on a lot of the risk." That avoids some of the potential tension Eisenhower identified.