Over a century of technological progress, some of the most important advances have come during periods of crisis. The second decade of McKinsey’s journey, 1936 to 1946, illustrates that principle. Forged in the crucible of a global war, the technological developments of this decade—including the first programmable computer—accelerated innovation in ways analogous to today’s AI boom.
Beyond computing, this decade also saw advances in aviation, energy, and infrastructure that expanded people’s understanding of what technology could do and how it could be applied. For today’s technology leaders, the parallels are direct. The challenge shared by all advanced technologies, including agentic AI and quantum computing, is to move from breakthroughs to broad application.
Computing introduces programmability into machines
The development of the first programmable computers stands out as the biggest breakthrough of the decade. In 1936, Alan Turing described the concept of a universal machine, establishing the theoretical basis for programmable computing. Within a few years, that idea began to take practical form. Early computing systems such as Konrad Zuse’s Z3 (completed in 1941) and the British Colossus (developed between 1943 and 1944) were designed for calculations and codebreaking. These early computers were large and costly, but they introduced a new concept. For the first time, machines could be programmed to perform different tasks without their hardware being physically reconfigured. This separation of logic from machinery established the foundation for modern computing.
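The core idea, that one machine can run different programs without any change to its hardware, can be sketched with a toy Turing-machine interpreter. This is an illustrative sketch only, not a reconstruction of any historical machine: one interpreter loop plays the role of the hardware, while transition tables play the role of interchangeable programs.

```python
# One interpreter loop ("hardware") runs any transition table ("program"),
# echoing Turing's separation of logic from machinery.

def run(program, tape, state="start", head=0, max_steps=10_000):
    """Run a program given as {(state, symbol): (write, move, next_state)}."""
    tape = dict(enumerate(tape))          # sparse tape; unwritten cells read as "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip("_")

# Program 1: flip every bit, halting at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

# Program 2: overwrite every bit with "1" -- same machine, different table.
ones = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flip, "1011"))  # -> 0100
print(run(ones, "1011"))  # -> 1111
```

Swapping the table swaps the machine's behavior entirely, with no change to the interpreter itself, which is the separation of logic from machinery described above.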
The lesson for modern technology leaders is clear: When a technology becomes programmable, its range of applications expands rapidly. AI is following a similar path. Early deployments are focused on specific use cases. But as capabilities mature, the opportunity shifts toward autonomous AI agents reinventing entire workflows and end-to-end processes across the enterprise. Capturing that value requires hard work to update data, platforms, and operating models alongside the core technology.
Aviation enters the jet age
Perhaps not surprisingly, given the all-encompassing world war, aviation took great leaps forward during this period. The decade saw the introduction of jet propulsion, with the first operational jet-powered aircraft emerging in the early 1940s. In Germany, the Messerschmitt Me 262 entered service in 1944 as the first operational jet fighter, while in the United Kingdom, Frank Whittle’s jet engine powered the Gloster Meteor, which began flying combat missions the same year. Earlier aircraft had already demonstrated the viability of powered flight, but jet engines extended their speed, range, and performance.
These jet engines, initially developed for military aircraft, required advances in materials, engineering, and maintenance that seemed impossible only years before. In the following decades, jet propulsion became the standard for commercial aviation. Aircraft such as the de Havilland Comet in the early 1950s and the Boeing 707 later in the decade brought jet travel to civilians and cracked open the era of global trade.
For today’s CTOs, the trajectory is familiar. Jet propulsion did not transform aviation on day one; its impact came later when it was engineered for reliability and standardized for commercial use. The same dynamic is now playing out with AI. Many advanced capabilities are emerging in narrow use cases, but the real value will come when AI is deployed consistently across the enterprise. That shift will require advances in system design and integration. It will also require successful change management, helping people see AI as safe and useful—much as earlier generations had to become comfortable with the idea of flying in a metal tube.
Energy unlocks unprecedented power and complexity
The discovery of nuclear fission introduced a new source of energy with much higher power density than previous technologies. In 1938, chemists Otto Hahn and Fritz Strassmann first demonstrated nuclear fission, with physicists Lise Meitner and Otto Frisch providing the theoretical explanation shortly after. Within a few years, this scientific breakthrough moved rapidly from theory to application. The Manhattan Project was launched in 1942 and culminated in the first atomic bombs in 1945. These weapons altered the course of history and demonstrated both the scale of nuclear energy’s potential and the magnitude of its risks.
In subsequent years, nuclear energy was adapted for civilian use. The first nuclear power plants began operating in the 1950s, and over time, nuclear power became part of the energy mix in many countries. What defines nuclear technology is the level of control it requires. Its benefits are substantial, but so are the consequences of failure. Managing it safely depends on rigorous engineering, disciplined operating practices, and strong regulatory oversight.
The relevance for today’s technology leaders is direct: As technologies become more powerful, the need for responsible use increases. When it comes to AI, companies that embed governance and accountability into their systems are better positioned to scale and sustain trust.
Infrastructure enables mobility at scale
The 1936–1946 period also saw the expansion of transportation infrastructure, particularly in railways and highways. In Germany, the Autobahn network was extended in the late 1930s to enable rapid travel across regions. In the United States and the United Kingdom, wartime demands drove tighter coordination of rail systems and the growth of long-distance trucking. These networks were no longer managed locally. They were built to support large-scale mobilization, moving troops, equipment, and materials quickly in a coordinated way. Rail schedules were aligned. Roads were better connected. Logistics operations were managed centrally to reduce bottlenecks. Over time, these systems became the backbone of economic activity. The innovation lay not in any single component but in how a tightly integrated network functioned as a whole.
For today’s technology leaders, the comparison to digital infrastructure is direct: Cloud platforms and data systems are today’s most critical networks. CIOs must manage a growing ecosystem of technology partners while ensuring that their enterprise architectures integrate with cloud and AI platforms and that their data is ready for agentic workloads.
Rocketry extends the boundaries of what is possible
Advances in rocketry during this period pushed engineering progress into new territory. In Germany, Wernher von Braun and his team developed the V-2 rocket, the first long-range ballistic missile, which was launched successfully in 1942. Like other technological developments of this period, these early rocket systems were designed for the military and had little application beyond that context. But they demonstrated entirely new capabilities that, not long after, opened up the cosmos for exploration. The launch of Sputnik in 1957 marked the beginning of the space age, followed by rapid advances in satellite technology.
Early rocket scientists would never have predicted their work would underpin global communications networks. This dynamic is common in emerging technologies that rely on engineering breakthroughs. The first applications reflect immediate needs, but as capabilities improve and costs decline, new use cases emerge and new value is created.
For today’s CIOs, the progression of rocketry highlights a familiar challenge. Technologies that begin with narrow use cases can expand into a wide range of enterprise applications, often faster than architectures, security models, and operating practices can keep up.
Early promise fails to scale
Not all technologies of the decade achieved lasting adoption. Airships were once seen as a viable—and almost magical—solution for long-distance travel, particularly in the early 1930s. That promise unraveled quickly after the Hindenburg disaster in 1937. Airplanes, while less elegant in some respects, ultimately proved more reliable and economically viable at scale. Airships could perform under controlled conditions, but they proved fragile under the harsher demands of commercial deployment.
This pattern remains relevant today. AI has shown strong results in pilots, but few companies have successfully scaled it to deliver lasting value. Our research illustrates that gap: Some 88 percent of companies report regular AI use in at least one business function, but only one-third have started scaling their AI programs.
For technology leaders, the implication is practical. Scaling requires more than technical performance. It also requires strengthening the underlying operating muscle. Rushing to deploy agentic AI without adopting a flexible product and platform model could risk a Hindenburg-level resiliency failure. Success with cutting-edge technology depends on understanding how systems perform under real-world variability and whether the broader operating model can support sustained use.
A leadership lesson endures
During the 1936–1946 period, many of the most important technologies were developed through coordinated efforts rather than isolated invention. With the world at war, scientists and engineers mobilized to push progress to its outer limits in pursuit of strategic advantage. Progress depended on the ability to integrate disciplines and ensure that systems functioned under very difficult real-world conditions. Scaling AI requires the same discipline. Organizations that get this right will be the ones that turn today’s breakneck technological progress into controlled and sustained advantage.
Chandrasekhar Panda is a partner in McKinsey’s Riyadh office, Henning Soller is a partner in the Frankfurt office, Klemens Hjartar is a senior partner in the Copenhagen office, and Sven Blumberg is a senior partner in the Istanbul office.