In this episode of the McKinsey Global Institute’s Forward Thinking podcast, co-host Michael Chui speaks with Mary “Missy” Cummings, one of the first female fighter pilots in the US Navy and now a professor in the Duke University Pratt School of Engineering and the Duke Institute for Brain Sciences, as well as the director of Duke’s Humans and Autonomy Laboratory.
Cummings talks about her life as a fighter pilot and her journey into automation and robotics. She also answers questions like:
- What are your reflections on diversity across different fields?
- What are some interesting developments you’re seeing in the automation of vehicles?
- Are there things that car designers should be learning from the aerospace industry, or vice versa, as they’re starting to implement more levels of automated technology and driver assistance?
- What is the perfect use case for automation?
- What excites you most about advances in technology?
Michael Chui (co-host): Hi, and welcome to Forward Thinking. I’m Michael Chui. Well, the first order of business is to say welcome, Janet. Janet is an MGI senior editor and she is joining me to become co-host of our Forward Thinking podcast.
Janet Bush (co-host): Thanks very much, Michael. It’s a wonderful chance to talk to some amazing people. Speaking of which, today’s interviewee is extraordinary. Why don’t you tell us a bit about her?
Michael Chui: Today’s interview is with Missy Cummings. She was one of the US Navy’s first-ever female fighter pilots. Now she is a leading academic in the fields of engineering, automation, and robotics. In this podcast, she shares her thoughts, among other things, on automation in airplanes and cars.
Janet Bush: Fascinating and very much an area that MGI has been researching. And Missy is also a trailblazer for women in two areas dominated by men: the military and, as she says in the podcast, technology. Can’t wait to hear what she says. Over to you, Michael, for this one.
Michael Chui: Missy Cummings is a professor in the Duke University Pratt School of Engineering and the Duke Institute for Brain Sciences, and is the director of the Humans and Autonomy Laboratory. Missy Cummings, welcome to the podcast.
Missy Cummings: Thank you for having me.
Michael Chui: First of all, I’d love to hear a little bit about how you got to where you are today.
Missy Cummings: I was a military brat, which means that I moved around a lot as a kid. I spent the later part of my childhood, high school, in Memphis, Tennessee. I sort of consider that home, but my parents are no longer alive. North Carolina is my home right now. I’ve been here for the last eight years.
My dad was in the Navy. I don’t think it’s that big of a surprise that I went into the Navy. I went to the US Naval Academy for college. And then after I graduated, I went to flight school. The year was 1988. Top Gun had come out in 1986. It’s not too surprising that I was very motivated to become one of the best of the best. And I flew A-4s and F-18s for about ten years with the military. I also went to graduate school in the middle of that time frame, and got my PhD in space systems engineering.
After I had flown fighter jets as one of the first female fighter pilots for three years, it was—I’m really glad I did it, but it was a really rough ride. It was a lot of difficulties, making that cultural transition. So I decided to get out and go back to school to get my PhD. I did that at the University of Virginia. And MIT found out that I was close to being done with my PhD, and they were very excited to get a female faculty member who was an expert in aviation. I was there for ten years. And then Duke made me a really good offer I couldn’t turn down to move my lab down south, which I did. And that’s where I am now.
Michael Chui: Well, congratulations on that. Clearly a pioneer in a number of different areas. Any reflection on diversity now being discussed in a lot of different fields?
Missy Cummings: It’s funny to me because being one of the first female fighter pilots, I saw all the problems that come around with being one of a minority that’s trying to break into a majority. And I think that the military has improved. What’s kind of concerning to me—and I have written an op-ed for CNN about this—is how much Silicon Valley today, in the year 2021, still resembles the fighter pilot mafia in 1993, ’94, meaning that it’s still a good ol’ boys’ club. There’s still a lot of fraternity-like behavior and just outward discrimination against women.
And this is one of the reasons that we see women in tech, particularly in Silicon Valley—you know, it’s a rough ride for them. It’s a rough ride for them now, like it was a rough ride for me 20 years ago. So I do worry that we’re not making as much progress as I had hoped that we would.
Michael Chui: What can be done?
Missy Cummings: There’s a magic percentage of 15 percent. It’s called critical mass. If you can get your minority to 15 percent, they start to make major inroads into the culture of the majority. Female fighter pilots are still in the single digits, percentage-wise, in the year 2021.
To actually fundamentally move that needle, you need to get that up. Those percentages are probably very reflective of what you might see in a lot of tech cultures in Silicon Valley as well. We need to get more women in. And we need to get them at all levels. Because if you don’t have women at senior levels being able to provide some high cover for the junior women, then it becomes very difficult.
Michael Chui: What’s magical about 15?
Missy Cummings: Well, I’m not the social scientist who came up with that. And I’m not sure how hard the science is that backs that number up. But certainly if, anecdotally, I look around at all the places I’ve been where there were smaller or larger numbers of women, you do see that the more women there are in a unit, whether that unit is a squadron or an academic department, the more natural it is to see women around.
And women attract other women. So I always have a lot more women in my lab than most men do. The importance of role models cannot be overstated here. I think that’s [also] true for people of color. The more that people see people like themselves in positions of leadership, then it’s inspiring and people realize that they can achieve the positions that they want.
Michael Chui: That’s a remarkable insight, and amazing that you’re able to continue contributing in that area, with regard to diversity. Talk more about what you’re researching now. You’re running the Humans and Autonomy Laboratory. What goes on there? By the way, it has the interesting acronym “HAL.”
Missy Cummings: I chose that on purpose, because if you’re a [2001: A] Space Odyssey buff you will know that HAL is the computer that was warring with the humans, trying to keep them under control. And I’ll let you see the movie to see what that’s all about.
By the way, I think we’re due for a remake of that movie. So yes, there is a joke in the name. We are really here to build bridges between humans and autonomy. I consider myself the human advocate. All the research that I do is looking at how humans are, or are not, integrated correctly into complex systems with either straight-up automation or varying levels of autonomy and artificial intelligence.
I read the tea leaves correctly many years ago that humans inside these systems would become more difficult to plan for, design around, test, certify. And we see that today in spades, with self-driving and potentially driver-assist cars. Humans have become an issue in terms of both designing the technology and as users of the technology.
Michael Chui [laughs]: It is kind of funny to view that it’s the humans that are the issue, right? But I hear what you’re saying. I am curious—maybe we just talk a little bit about the history of autonomy with regard to vehicles, et cetera. Take me back. You mentioned that you read the tea leaves. Where did you see things starting to move along this dimension that you found interesting?
Missy Cummings: When I say I read the tea leaves, in the ’90s, when I was in the military, I saw the rise of GPS. I saw the planes, the automation in the planes that I was flying. And it was clear to me that something was changing, in terms of more and more automation was taking over parts of jobs for humans. And I also saw a lot of deaths of humans as a result.
In the time that I flew fighters (about three years), on average one person died a month every year I was there. And they were all training accidents. Nothing was in wartime. And it was a stark reality that we had started to design aircraft that far exceeded the capabilities, the cognitive capabilities, of humans. So that’s what motivated me to get into this field and say, “We can do this better. There’s got to be a way to take a principled approach to designing technology to consider the strengths and limitations of the humans.” And that is my area in a nutshell.
Michael Chui: Say more about what you found, or the conclusion that you came to, that these tragic losses of life had to do with exceeding the human cognitive capability in the cockpit or what was happening.
Missy Cummings: This particular day, it was late in the bounce pattern, and they decided that the wives and girlfriends could come and watch. Mostly the men did the bounce pattern. And on this one particular day, one of my peers did a touch-and-go, everything looked good, and he got the infamous “deedle deedle.” It was a sound that was made in the cockpit. And one of the flight control axes had thrown an error. So we were trained, pretty much like monkeys, to hit the reset button, very much like cycling the power on your computer, except that it happens really fast.
And in this particular case the system rebooted, the software rebooted, but the hardware was not connected to the software correctly. And it caused the rudder to actually be out of limits, even though the software set the center point of the rudder back to zero. The aircraft flipped and killed the pilot. He had no chance for escape—and right in front of his fiancée.
And this problem of—you know, the pilot did not understand that there had been a software-hardware error. There actually was a place where he could’ve figured it out, but it was in a menu buried several layers deep inside the system. This is one of the problems with a single pilot. He didn’t have time to troubleshoot the system. And in the end, that accident report blamed the pilot. The accident report said, “Pilot error. It was all his fault.”
I think that we need to move away from that. In the end, maybe he could’ve figured it out, but he was set up to fail. And we need to be much more careful in how we’re designing these safety-critical systems with automation so we’re not setting people up like this.
Michael Chui: Are there things that car designers should be learning as they’re starting to implement more and more levels of automated technology and driver assistance that the aerospace industry has learned, or vice versa? What’s your assessment of the possible cross-fertilization there?
Missy Cummings: One of the big areas that researchers learned about as a consequence of automation in aviation is a problem known as “mode confusion.” This is when the aircraft was in one automated mode, but pilots thought it was in a different mode and took different, oftentimes very incorrect, actions, which sometimes had catastrophic consequences.
Indeed, that was the case for my peer who flipped his airplane doing touch-and-goes. He didn’t realize that the aircraft was actually in a fail mode. He thought the aircraft was in a safe mode and he didn’t need to do anything different with the aircraft.
We’ve known about this for a long time in aviation. But this is new learning for the automotive world. And we are seeing this problem, mode confusion, quite a bit. We see this, for example, when people think that the words “autopilot” and “full self-driving” actually mean those things. And people climb in the back seat or take their hands off the steering wheel just for a little bit, and then they don’t realize the trouble that they’re in, and then the car crashes.
People will often think that the car’s doing a great job tracking between the lanes and doing Navigate on Autopilot. And then they’ll say, “Well, I can just, ooh, oops, I dropped my phone” or “I dropped my french fry” or whatever it is they were doing in the car. Unfortunately, in cars really bad things can happen in a tenth of a second, especially in heavy, dense traffic. And so people are getting lulled into a false sense of security about how well the car’s performing.
I think that the automotive community is just now starting to get it. Now whether they’re going to be able to do anything to mitigate this, I think that’s another big question.
Michael Chui: It is interesting that planes fly so fast, but there’s a lot of room up in the air, I guess. I think you once mentioned how much time a pilot on a commercial flight actually spends manipulating the flight controls, hands on the stick. It’s a surprisingly short amount of time.
Missy Cummings: There’s so much automation in commercial aircraft now. It is surprising to people. [It] really depends on whether your pilot is flying an Airbus or a Boeing. If the plane is an Airbus, most pilots touch the stick only for about three and a half minutes out of any flight. And if it’s a Boeing, then the pilots touch the stick for about seven minutes.
This happens because the automation is so good at manipulating the throttles that if we let humans do it, planes waste a lot more gas and create a much bigger carbon footprint when humans fly them. Humans are in planes now really as babysitters. And that introduces a whole other set of issues with people being bored.
Michael Chui: That’s a different failure mode than mode confusion.
Missy Cummings: That’s correct. That’s correct. It’s not a failure mode, but it exacerbates failure modes.
Michael Chui: We’ve done quite a bit of research on automation at MGI. From your point of view, what is the perfect use case?
Missy Cummings: In the military, in the Department of Defense, they like to use the mantra “dull, dirty, dangerous.” It’s a pretty good mantra. The tasks that are boring; the tasks that are “dirty,” meaning maybe some kind of chem-bio environment, where human health may be at risk for doing something. Mining, for example, is a good reason that we have robots. Nobody needs all that coal dust in their lungs.
And then of course “dangerous”—wartime. When we use drones for warfare, we see not just fewer military casualties but also fewer civilian casualties. And the reason is that when you send a drone in to drop a bomb as opposed to a human, the human is no longer at physical risk—the human that’s controlling the drone from 2,000 miles or 4,000 miles away. The human can take their time. They have a lot of people they can talk to on the radio. They are not physically under stress. And they’ve had plenty of sleep. You would be surprised at the number of drone operators that have a Starbucks cup of coffee sitting right next to them in their drone trailer.
There are all these practical considerations. And I think that we need to appreciate that the sweet spot is a collaboration between humans and machines. Let the humans augment the machines, or let the machines augment the humans, so that robots can extend our capabilities and let humans do what we do best, which is problem-solving.
Michael Chui: So just to push the question, do we need human pilots in commercial passenger aviation?
Missy Cummings: Passenger aviation is a little bit trickier, not because of the science of flying. But there is this issue that we know as “shared fate.” Shared fate is the comfort that passengers get by knowing that there’s a human in the cockpit and that if anything goes really wrong that the human is in the cockpit doing everything he or she can to save their own life, and thus saving the lives of the passengers.
I think there’s some social control issues that we have to think about as well. I don’t see us getting away from that anytime soon. I think for passenger airlines, we’re going to need at least one person who’s the captain for my lifetime.
That is not true for packages. I think right now the package delivery services—FedEx, UPS, DHL—could and should become drone aircraft operations.
Michael Chui: Let’s turn to automobiles. We talked about it a little bit already, but clearly it’s one of those areas where there’s a lot of interest and also a place where there at least has been a lot of investment in the past five, ten years. I’m curious about the history.
There was that DARPA Grand Challenge back in 2004—DARPA, the Defense Advanced Research Projects Agency, had a contest. Teams could sign up to win a prize by driving an autonomous vehicle about 150 miles through the desert. Nobody finished. And then the following year, in 2005, I think there were five finishers. And there was a winning car, I guess from Stanford, if I recall correctly. You were in the field at the time. I think prior to that people thought, “It’s impossible.” In 2004 they said, “Look, this definitely didn’t work.” And by 2005 it did kind of work. What were your thoughts as you saw this playing out, coming from the field that you did?
Missy Cummings: At the time all this was happening, I was working with the MIT DARPA Grand Challenge team on a parallel project doing robotic forklifts. I saw what was happening at the ground level. I think everyone was blown away with how quickly this technology transitioned out of the academic sphere and into the commercial sphere. It literally did happen overnight.
But I think what we’re seeing now are the consequences of that. Because the technology was still extremely immature, still very much experimental. And Silicon Valley decided to try to commercialize a technology that was still very much in its infancy. I think this is why you’ve seen so many self-driving car companies—Waymo, Cruise—still really struggling to get the technology to a point of maturation that’s safe enough in commercial settings, in robo-taxi settings.
We’re not going to get this anytime soon. I have yet to see any real advancements that I think can allow these cars to “reason under uncertainty.” That’s where I draw the line. The cars must be able to reason under uncertainty, whether that uncertainty is weather, human behavior, different failure modes in the car. If cars cannot figure out what to do at least to fail gracefully, then it’s going to be a long time before this technology is ready.
Michael Chui: Just reflecting on it, it’s a funny thing about human expectation, right? At first, we thought this problem was super hard. Then these breakthroughs you mentioned overnight. And maybe everybody thought, “Oh, it’s a lot easier than we thought.” And now we’re rediscovering that the last X percent is really, really hard. Is that something that you see in common in other technologies as well? Is this just something we should expect, this hype cycle that happens, and we overshoot in both directions?
Missy Cummings: I think that this hypercompetitive race has made people focus on the wrong parts of the technology—the parts that need the care and feeding to make it work and/or to get derivative technologies that can be useful. We may be able to get slow-speed technology—slow, meaning for autonomous shuttles—out of this. Maybe some very limited geofenced operations for robo-taxis.
But we’re not going to get the kind of full reach that either car companies or the Silicon Valley companies like Cruise and Waymo think that we’re going to get. We’re going to fall short of that goal. I’m not sure yet what the spinout technologies will be. But I think that one day we are going to look back, and the self-driving race to the bottom, I think it’s going to become a really important set of Harvard Business [School] case studies.
Michael Chui: You mentioned that these companies are focusing on the wrong problems. What are the right problems to solve?
Missy Cummings: The right problems, specifically when we’re talking about the surface transportation world, including trucking and cars, involve collaboration between sensors and humans. Letting the system detect when humans are doing stupid things, like falling asleep at the wheel or texting while driving, so that the car can react and keep itself in a safe place. If nothing else, pull over to the side of the road and flash the blinkers until the human can take charge in the way that they need to. So this idea of technology acting as a guardian.
How can we have humans and technology work together to buffer the limitations that both have, while capitalizing on the strengths of one another?
Michael Chui: I think you’ve also mooted another taxonomy of things that you’d want to see happen for these things to be on the road. What do you want?
Missy Cummings: I know there’s a whole set of parents out there that are with me. I call myself Eeyore sometimes about the status of self-driving cars in the future.
What I truly want, being the mother of a 14-year-old, is for self-driving cars to be here. I don’t want my 14-year-old daughter behind the wheel of a car ever. I do research in this field. I do understand how bad human drivers are. What I want would be for this technology to work, but understanding my job is to do research in this field, I recognize that it’s not going to happen.
What I foresee is that there is—not just in automotive transportation, but also in medicine and the military, finance—we are going to see a very distinct shift away from replacing human reasoning to augmenting reasoning. That’s where the real future is. That’s where the money is going to be made. Because people think they’re doing it, but they’re not really doing it, and they’re not doing it in the right way.
If I hear somebody try to tell me how they’re doing explainable AI one more time, I’m just going to go bonkers. Explainable AI is so much more than just trying to show weights or thresholds or various capabilities of an underlying algorithm. Explainable AI means different things to the different users that may come into contact with AI. I think that there’s an entire growth area in real explainable AI.
There’s also going to be a huge growth area in the maintenance of any kind of computer system that has underlying AI in it, including robots. I think robot maintenance is going to be one of the biggest growth areas in the next 20 years. We cannot keep all the robots that we have right now working. And we’re not thinking about maintenance in a way that’s streamlined, that can pull from the resources of the typical socioeconomic class that’s doing maintenance now on regular forklifts. We’re going to have to figure out how education needs to change so that we can help people lift themselves up by their bootstraps.
Michael Chui: So if you don’t mind, I’ll jump into the lightning round of quick questions, quick answers. Here we go. What is your favorite source of information to keep up to date on autonomy?
Missy Cummings: Wow. That’s really hard. I have a lot of sources of information. I don’t have one. I am very fortunate that I have a rich network of former students and peers and colleagues, and I work in academia. So whether it’s an email or a newsletter or a headline or a Twitter feed, it takes a lot of different sources for me to stay on top, that’s for sure.
Michael Chui: What excites you most about advances in technology?
Missy Cummings: I know this is going to sound a little weird. I love it when humans start to figure out ways around technology. So I’ve been a real critic of Tesla. But every time I see a new way that somebody’s tried to figure out how to defeat the Tesla Autopilot nag, it does kind of tickle me a little bit, because I always appreciate that even if there’s no real purpose for it, humans are always trying to figure out, even if they don’t recognize it, how to outsmart a computer. I just love that about humans [laughs].
Michael Chui: What worries you most about technology advancement?
Missy Cummings: AI illiteracy, tech illiteracy. I see so much of it. I hung out at the World Economic Forum as the head of one of their committees for a couple of years. I spent a lot of time with C-suite people. Tech illiteracy scares the pants off me.
Michael Chui: If you could fly any machine, what would it be?
Missy Cummings: A Warthog, an A-10 [laughs].
Michael Chui: If you were the head of R&D for an aerospace company, what would you prioritize?
Missy Cummings: Well, I need a qualification. What kind of aerospace company?
Michael Chui: Civilian.
Missy Cummings: Like, commercial passenger?
Michael Chui: Yes.
Missy Cummings: The passenger experience.
Michael Chui: If you were head of R&D for an automotive company, what would you prioritize?
Missy Cummings: Guardian technology in the car.
Michael Chui: What regulation is most important to implement?
Missy Cummings: Certification and testing for AI.
Michael Chui: In what company would you be most interested in investing?
Missy Cummings: I am definitely not going to wade into those waters. No, no, no. I’ll have everybody and his brother calling me out for a conflict of interest [laughs].
Michael Chui: OK. What would you recommend that a college student study?
Missy Cummings: Computer science.
Michael Chui: What would you be doing if you weren’t a professor?
Missy Cummings: I would be leading outdoor adventures.
Michael Chui: What kind of adventures?
Missy Cummings: Oh, kind of a smorgasbord, hiking, snowboarding, whitewater rafting.
Michael Chui: That’s great. And what one piece of advice do you have for our listeners?
Missy Cummings: Stay curious.
Michael Chui: Missy Cummings, thank you so much for your insights.
Missy Cummings: Thank you so much for having me.