Despite uncertainties about global growth and geopolitical developments, the tone at Davos was upbeat and focused. Many of the themes echoed years past, but several elements struck us as particularly noteworthy.
Tech is still hot, especially AI …
Technology has been a perennial hot topic at Davos, and this year was no exception, though the focus fell squarely on artificial intelligence (AI). In truth, the term “AI” was used to describe a broad set of advanced technologies, such as machine learning and natural language processing. Nevertheless, no one needs convincing that AI is important, and some intriguing examples highlighted just how deeply it has permeated businesses across many sectors. It’s being applied to reduce the amount of water needed to grow potatoes, to listen to shrimp as they feed in order to cut down on overfeeding, and, in garbage collection, to route drivers via remote sensors so they use less fuel. While these examples are cool and innovative, most AI efforts still focus on taking cost out of existing operating models and making them work better. There is far less attention on longer-term disruption, such as using AI to develop new business models.
… but questions abound about how to make it pay off …
Two years ago, people were asking, “What is AI?” A year ago, people wanted to learn how to trial minimum viable AI-fueled products. This year, the questions focused on how to get more value from AI. The sense was clear: AI is important, but companies are investing a lot of money in it without seeing the bottom-line impact they expected.
… and there’s a realization that a lot of hard work is needed
While it’s true that AI is “everywhere,” there’s still a lot to learn. That came through in a broad realization that, while some experiments can generate value, achieving scale is really hard work. Cleaning, sorting, and linking data—all required for AI to work—are laborious tasks and can take years for large businesses. Beyond that, using AI to its full effect requires wide-reaching and continuous efforts to change culture and how people work with and alongside AI technologies. Using AI to help run remote clinical trials, for example, requires leaders to address what the people who used to run the trials should do now.
A call for more responsible tech
It’s no exaggeration to say that everyone was aware of the need to address how to use tech more responsibly. Executives are feeling the need to take responsibility for building—and, in some cases, reestablishing—trust with their customers, while also being thoughtful about what role regulation might play. At a high level, concerns focused on three areas:
1. Solving trust concerns. The issues around personal data and privacy dominated discussion, and a number of ideas surfaced about how to address the problem. There was significant debate, for example, about how citizens could have more ownership over their data, and maybe even be paid for it. Other ideas centered on demystifying tech to build consumer trust: simplifying interactions with customers, from making contracts easier to understand to making it clear how tech is using their data.
2. Addressing bias. Executives are sensitive to the issue of bias in AI. There is a much clearer understanding today that all AI is trained on data, but because all data lives in the past, and because past decisions have been made in biased ways, AI can perpetuate that bias. At a minimum, it’s important to drive explainable AI so that people can understand the rationale behind the algorithms and address potential sources of bias. Sophisticated AI algorithms for car insurance, for example, might charge one driver X and another driver in the same city Y. Being able to show that the variance is based on driving behavior rather than some other factor, and providing a way for consumers to address those variances, would help.
3. Managing compliance. There was deep interest in ensuring that organizations comply with current rules and root out questionable behavior. Otherwise, executives worried, the environment could give rise to sweeping regulations that would affect everyone, not just the bad actors. Many countries, particularly in the West, are already considering significant regulations.
The East is rising
It was interesting to observe how far tech companies from the East have progressed. In many cases, it would not be an exaggeration to say they’ve overtaken the West: many business leaders of Asian companies have a deep understanding of the technology, plan for the long term, and are bold in their ambitions.
This development was all the more palpable in the context of a potential surge of regulations in the West, which many believe would hamper Western companies’ ability to scale. This is a space to watch.
Overall, companies appeared sanguine about their own prospects. But beneath that was an undercurrent that the “risk barometer” has risen, with heightened sensitivity to privacy and tech-responsibility issues leaving businesses less room for error.
Nicolaus Henke and Paul Willmott are senior partners in McKinsey’s London office.