From concept exploration to autonomous combat, artificial intelligence (AI) is being introduced into new areas of aerospace and defense. Machine learning has already enabled rapid strides in data analytics, but the next steps could have even greater impact, from how aircraft are designed and manufactured to how they are crewed and flown.
“It’s already impacting every dimension of our business, and that impact will continue to grow,” says Boeing CEO Dennis Muilenburg. Experts within the company’s AnalytX organization, formed in 2017, are already applying AI to supply chain and manufacturing system management and engineering toolsets. “We also see it working into our product lines themselves, into the systems on our airplanes,” he says. 
But there are barriers to be overcome before AI can take flight. “To get something safety-certified, you have to be able to predetermine what the machine will do in a scenario, and AI isn’t deterministic in that regard,” says Collins Aerospace CEO Kelly Ortberg. “We are going to have to continue to work that boundary of how does inherently nondeterministic AI apply in a deterministic certification world.”
Airbus, Boeing, Lockheed Martin and others are experimenting with onboard AI, but the research is in its infancy. “I don’t think you’re going to see AI flying airplanes independently in the near future,” says Ortberg. “I think it may become a supplemental tool, but there still has to be an overarching deterministic system that determines, under failed conditions, what the airplane does.”
For now, AI in aerospace means statistical learning. “I’m not a believer that artificial intelligence really exists right now,” says Raytheon Chief Technology Officer Mark Russell. “I would say there will come a day when you can do more with it than just ingest large amounts of data and sort things out and help make decisions. There will come a day when machine learning actually gets to the right fidelity and is more deterministic.”
It is not only the nondeterministic nature of AI that poses a problem for aerospace. Today’s machine-learning systems are immensely powerful at statistical pattern recognition when trained to sift through enormous amounts of data, but they cannot explain their decisions. Without an explanation to back a prediction, users cannot build the trust needed for AI to secure a place alongside the human.

“There are a lot of different approaches to AI, and most of them are ‘black box’ today. You train a multilayered convolutional neural network, but then you have no idea what that neural network will or will not do in the future,” says Paul Eremenko, United Technologies Corp. chief technology officer.
“You can get statistics, but on today’s certification basis, it’s not explainable. You cannot tell why it’s doing what it’s doing,” he says. “So alternative approaches to AI that are explainable, and therefore certifiable, and that also provide much better human-machine collaborative capability, are the key for the longer term.”
Because of the complexity inherent in machine learning, explainability will be key to customers accepting systems. “You can’t go through the code and say, ‘I can validate this,’” says Russell. “There’s a whole ‘How do we test?’ thing. Right now, when you’re done with so many neural networks, and there are so many things going on that you can’t debug, you can’t really know what happened. At some point, we’re going to have to come up with a way where we can actually understand what happened.”
To that end, DARPA has launched its AI Next campaign with more than $2 billion in funding over five years. Developing explainable AI is a key goal, along with creating systems that can learn from experience while operating in the real world, reducing the number of manually labeled examples required to train a neural network and defending against misclassification attacks on networks.
DARPA has a long involvement with artificial intelligence, beginning in the 1980s with the first wave of AI involving expert systems. These encoded the knowledge of subject-matter specialists in the form of handcrafted rules. An example was the Pilot’s Associate program to develop a decision-support system to help the pilots of single-seat fighters.
“Unfortunately, for every rule there’s an exception,” says DARPA Director Steve Walker. “Expert systems are brittle when confronted with situations that don’t conform to their rules, and adding a new rule to account for every exception quickly becomes intractable.”
The second wave of AI focused on building neural networks, inspired by the human brain, and training them with large numbers of labeled examples. “Starting in 2010, sufficiently powerful computer hardware became available to make these approaches work surprisingly well,” Walker says.
“Like first-wave expert systems, however, second-wave systems have shortcomings. Adding imperceptible amounts of noise to a picture can cause a trained neural network to wildly misclassify it,” he says. “So far we only have point solutions to such adversarial image attacks, as the practice of machine learning has run well ahead of the theory.”
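To make that kind of adversarial image attack concrete, the sketch below shows the standard fast-gradient-sign form of it. It is only an illustration: the pretrained classifier, the random stand-in image and the perturbation size are placeholder choices, not anything DARPA has fielded.

    # Minimal sketch of an adversarial image attack (fast gradient sign method).
    # The classifier and the input image are placeholders for illustration.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo

    output = model(image)
    label = output.argmax(dim=1)                             # class the network currently assigns
    loss = torch.nn.functional.cross_entropy(output, label)
    loss.backward()

    epsilon = 0.01                                           # imperceptibly small perturbation
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

    # To a human the two images look identical, yet the predicted class can flip.
    print(label.item(), model(adversarial).argmax(dim=1).item())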

DARPA’s Explainable AI (XAI) program is developing computational architectures that enable neural networks to explain themselves. “The knowledge of neural networks is contained in millions of link-weighting factors, making these systems incapable of explaining their decisions,” says Walker. “[Explainable AI] will help human operators develop appropriate levels of trust in their systems.”
The inability of machine-learning systems to provide the justification for a specific prediction “can leave the user frustrated, particularly if they’re responsible for critical applications,” says David Aha, XAI program manager. XAI is creating machine-learning processes that output an explainable model, along with an interface that lets the user interrogate that model and know when to trust the system.
XAI is focused on two types of application: data analytics and autonomous control. An example of the first is an intelligence analyst working with a machine-learning system that is responsible for looking at imagery, identifying certain objects and activities and recommending how to respond to what it sees.
The research is showing that hidden biases within machine-learning algorithms can lead to misleading predictions, such as identifying a shopping mall as a solar farm, because the network is paying more attention than the user realizes to incidental features such as a parking lot.
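One common way to surface that kind of hidden bias is a saliency map, which shows which input pixels most influence a prediction. The sketch below is illustrative only; the model is a generic pretrained classifier and the image is a random stand-in for an overhead tile.

    # Sketch of an input-gradient saliency map: which pixels drive the prediction?
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a satellite tile

    scores = model(image)
    scores[0, scores.argmax()].backward()                    # gradient of the top class score

    saliency = image.grad.abs().max(dim=1).values            # per-pixel importance map
    # Bright regions reveal what the network is actually attending to (for example,
    # a parking lot rather than the panels behind a "solar farm" call).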
“In terms of autonomy, we have operators who are interested in why certain actions are being taken by an autonomous vehicle,” Aha says. The vehicle might be out of sight, with access to information the operator cannot see, so they want to be able to interrogate the model to understand the behavior.
XAI aims to increase explainability without sacrificing learning performance. When working with sensor data, deep-learning models have been shown to greatly increase performance, “but often it has been at the sacrifice of explainability,” he says. “The goal is to create AI systems that are machine-learning enabled, in which users can understand the learned model, why the predictions are being generated, when they can trust the model and work with it effectively.”
In another XAI project, involving self-driving cars, vehicle control commands generate text explanations of the model’s actions. “What they’ve found is that, given explanations, the humans are doing much better. There’s also evidence [the explanations] have engendered appropriate trust in the system,” Aha says. “One drawback is that, if the system provides an incorrect explanation, it can be very damaging.”
DARPA is pushing AI into new areas. One is software production, where not only is the amount of code in systems increasing, but also the portion of critical functionality realized in software. “A parallel and not so exciting trend is that the defects or vulnerabilities in software are also increasing at the same time,” says DARPA Program Manager Sandeep Neema.
Tools and methods for software production and quality assurance are not scaling up with the amount of code needed, he says. And software engineers are unable to effectively utilize the large codebase that exists today to understand the source of bugs and make sure they are not repeated.
One answer, says Neema, is to treat software programs as data for machine learning. DARPA is developing new capabilities in code mining, bug detection and program synthesis. The goal is to enable engineers to easily search existing databases for usable code, apply learning-based approaches to anomaly detection, and generate program artifacts with minimal specifications.
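A minimal sketch of the first idea, treating code as data, might look like the following. The snippets, the tokenization and the anomaly detector are all illustrative assumptions, not a description of DARPA's actual toolchain.

    # Sketch: treat source code as data and flag snippets that look anomalous.
    from sklearn.ensemble import IsolationForest
    from sklearn.feature_extraction.text import TfidfVectorizer

    snippets = [
        "if (ptr != NULL) { free(ptr); ptr = NULL; }",
        "if (ptr != NULL) { free(ptr); ptr = NULL; }",
        "for (i = 0; i < n; i++) sum += a[i];",
        "free(ptr); free(ptr);",                      # a suspicious double free
    ]

    # Represent each snippet as a bag of code tokens.
    features = TfidfVectorizer(token_pattern=r"\w+").fit_transform(snippets).toarray()

    # An isolation forest flags snippets that look unlike the rest of the corpus.
    detector = IsolationForest(contamination=0.25, random_state=0).fit(features)
    print(detector.predict(features))                 # -1 marks snippets flagged for review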
DARPA is also applying AI to design. “I’m focused on the earlier stage of design, because that is still very artisan, and the question I’m asking is: How can AI help us explore all the different possibilities that are actually available?” says Program Manager Jan Vandenbrande. “There are so many options available; can we have AI explore all of these combinations to find really new, novel things?”
Researchers have trained a deep neural network with observational data on flow around a cylinder and used it to generate equations governing physical behavior. The result closely corresponds with the Navier-Stokes equations that describe viscous-fluid motion. “So can we train a neural network to discover other physical laws we haven’t thought about?” Vandenbrande asks. “What this may create is Newton in a box, where you give it the observational data on the proverbial apple falling and the outcome is F=ma.”
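A toy version of that “Newton in a box” idea is sparse regression over a library of candidate terms, as sketched below for the proverbial falling apple. The data are simulated and the candidate library is deliberately tiny; it only illustrates the approach.

    # Toy sketch of equation discovery: fit observed acceleration against a
    # library of candidate terms, then keep only the significant ones.
    import numpy as np

    g = 9.81
    t = np.linspace(0, 2, 200)
    height = 10.0 - 0.5 * g * t**2                    # simulated "observations" of the apple
    velocity = np.gradient(height, t)
    accel = np.gradient(velocity, t)

    # Candidate terms the governing law might contain: a constant, height, velocity.
    library = np.column_stack([np.ones_like(t), height, velocity])

    coeffs, *_ = np.linalg.lstsq(library, accel, rcond=None)
    coeffs[np.abs(coeffs) < 1e-2] = 0.0               # enforce sparsity
    print(coeffs)                                     # roughly [-9.81, 0, 0]: constant acceleration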
Under another program, researchers combined reinforcement learning with a physics engine of the kind used in video games to find new ways of gliding. “What they discovered was a new form of flying, where the airplane leverages different boundary layers to extract energy and increase range,” he says. “No self-respecting pilot would ever fly this way, but certain birds do; so it’s showing that we are discovering new things we hadn’t thought about.”
Another DARPA AI program is leveraging topology optimization—placing and removing material based on underlying physics to minimize weight—to balance shape with materials. “The problem with topology optimization is that, underneath it, you are solving some difficult nonlinear equations, and you have to set a bunch of knobs and give it an initial guess,” says Vandenbrande.
“Humans are not very good at guessing things. And if you don’t set things right, it takes forever to find a solution,” he says. Instead, researchers are training neural networks. “Now the machine learning can set those knobs and give you the initial guess so you can get to a solution several orders of magnitude faster. We’re enabling the synthesis of new designs significantly quicker.”
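A minimal sketch of that idea might train a small regressor on the outcomes of past optimization runs, so it can propose solver settings for a new load case. Everything here, the features, the “knobs” and the random training data, is assumed purely for illustration.

    # Sketch: learn to propose a topology-optimization solver's settings from past runs.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    load_cases = rng.random((200, 4))   # e.g. load magnitude, direction, volume fraction, span
    best_knobs = rng.random((200, 3))   # e.g. penalization factor, filter radius, move limit
                                        # (random placeholders for settings that converged well)

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(load_cases, best_knobs)

    new_case = rng.random((1, 4))
    print(model.predict(new_case))      # suggested starting point for the nonlinear solver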
AI is also being applied to new manufacturing processes. When additively manufacturing a part, metal stays hot longer at the bottom than the top, giving the alloy grains there more time to grow, so the material properties at the bottom differ from those at the top. “Can we use AI to understand how you set the parameters of the machine to compensate for this in design or leverage it?” asks Vandenbrande.
Boeing has applied this to electron beam additive manufacturing. “This is similar to welding, and we have a lot of data and empirical formulas to describe the relationship between how you weld and the material properties,” he says. A neural network was used to merge the known equations with process data from the machine and derive material properties from the process parameters. “The formula connects the properties of the machine, the process and the material to predict what the yield strength would be based on all the knobs at your disposal,” he says.
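A hedged sketch of such a surrogate is shown below. The process parameters, the synthetic relationship and the units are invented for illustration; the actual Boeing work merged known empirical weld formulas with process data measured on the machine.

    # Sketch: a surrogate predicting yield strength from deposition parameters.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # Columns: beam power (kW), travel speed (mm/s), height above build plate (mm)
    params = rng.uniform([5, 2, 0], [15, 20, 300], size=(500, 3))
    # Invented stand-in relationship plus scatter, purely for illustration.
    yield_mpa = 800 + 5 * params[:, 1] - 0.3 * params[:, 2] + rng.normal(0, 10, 500)

    surrogate = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000,
                                           random_state=0))
    surrogate.fit(params, yield_mpa)
    print(surrogate.predict([[10.0, 12.0, 50.0]]))    # predicted strength for one knob setting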
Lockheed Martin tackled a different problem: how to trust the bond between two pieces of composite material. “Right now we don’t know, and so we don’t trust the bond,” says Vandenbrande. “As a consequence, we drill holes and punch rivets into it, which means cost and assembly time go up, and we’re introducing defects. If we can get rid of that, we could save weight, time and cost.”
There are no reliable mathematical models to predict bond strength, he says, so Lockheed performed an exhaustive study into the influence of different parameters including temperature, humidity and time out of storage for the raw material. “They created a huge decision tree, which goes back to the early days of machine learning, and now have a path to follow to get a reliable bond,” Vandenbrande says.
“If you end up somewhere in the middle of this tree, you now know whether you should proceed and what kind of mitigating steps you can take to improve the quality of the bond,” he says. “In some cases, it means you have to do a specific kind of surface preparation. Or it simply says, ‘You need to reject this part because you’ll never get there.’ The tree gives you that kind of insight.”
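The kind of tree described here can be sketched with a standard decision-tree learner, as below. The parameters, thresholds and data are made up; the point is only that the resulting rules are readable by a technician on the shop floor.

    # Sketch: a decision tree relating bond-process conditions to bond quality,
    # producing rules a technician can read. All values are made up.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(2)
    # Columns: cure temperature (C), relative humidity (%), material out-time (hours)
    conditions = rng.uniform([150, 10, 0], [200, 80, 72], size=(300, 3))
    # Invented rule for illustration: bonds fail when humidity and out-time are both high.
    good_bond = ((conditions[:, 1] < 60) & (conditions[:, 2] < 48)).astype(int)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(conditions, good_bond)
    print(export_text(tree, feature_names=["cure_temp_C", "humidity_pct", "out_time_hr"]))
    # Following a branch indicates whether to proceed, add surface preparation, or reject.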
AI is still a long way from replacing the human designer, says Vandenbrande. “To do design you have to know how the world works. As humans, we don’t go all the way back to first principles. We have all these shortcuts in our mind, and we’ve learned this over years,” he says. “The question is, how can we discover all these shortcuts using some kind of AI?”
Vandenbrande sees the future of AI-enabled design as a partnership where the human’s responsibility is problem formulation—“this is what I am looking for; these are the constraints”—then the AI searches the design space and comes back with, “here are some ideas you should explore.” “I believe this becomes a dialog between the computer and the human, and the human gets insight from the AI and changes the problem formulation,” he says.
When it comes to applying AI to autonomy in combat, interaction with the human becomes a critical issue. 
DARPA’s Alias program is developing high levels of automation that can be added to aircraft to reduce the onboard crew. The system, developed by Sikorsky, is being tested in an optionally piloted UH-60 Black Hawk that can be flown with two, one or zero pilots.
“With two pilots on board, in the Black Hawk case, the system operates in the background much like lane assist in a vehicle,” says program manager Lt. Col. Philip Root. The second-most straightforward is zero pilots, he says, where the aircraft understands it is responsible for all actions and there is no need to communicate with a pilot.
“The most challenging is when you have a single pilot. Now you have removed a human pilot and replaced it with an autonomous co-pilot, and that interaction is not well understood,” Root says. But even more challenging, he argues, could be “less than one pilot”—situations where the pilot is incapacitated, but not unconscious, particularly during training.
“We believe that Alias has a real opportunity to assist here, but it’s incredibly challenging because you have a pilot who perhaps is unaware of their incapacitation. And the last thing we want to do is enrage the pilot with actions he or she may not feel are warranted at the time. So how do we find a new teaming arrangement in an environment nobody wants to find themselves in?”
In its current form, Alias does not incorporate AI, because the system is designed to be certifiable and the FAA has no way of determining the airworthiness of nondeterministic systems with learning behavior. But AI is taking to the air. DARPA has launched the Air Combat Evolution (ACE) program to automate air-to-air dogfighting, enabling reaction times at machine speeds and freeing pilots to manage the air battle.
Describing air-combat training as a crucible where pilot performance and trust are highly refined, DARPA says ACE will use human-machine collaborative dogfighting as a challenge scenario to increase pilot trust in autonomous combat technology. “Being able to trust autonomy is critical as we move toward a future of warfare involving manned platforms fighting alongside unmanned systems,” says program manager Lt. Col. Dan Javorsek.
Under the four-year, three-phase program, combat autonomy algorithms and human-machine interfaces will be developed and tested over a series of increasingly complex exercises involving first subscale then full-scale aircraft in 1v1 and 2v2 air combat. “We envision a future in which AI handles the split-second maneuvering during within-visual-range dogfights, keeping pilots safer . . . as they orchestrate large numbers of unmanned systems,” says Javorsek.
By training AI in the rules of aerial dogfighting much as fighter pilots are taught, ACE may have a key role to play in accelerating the movement of machine learning from the data center into the aircraft cockpits of the future.