
The promise of AI

An alum discusses tech literacy, applications in mental health, and completing Beethoven's 10th Symphony.
Headshot by Oliver Betke

In this thought-provoking interview, alumna Anastassia Lauterbach, an expert in gen AI and technology, shares her insights on the intersection of AI and mental health, the challenges of developing complex AI projects, and her mission to educate and entertain through her company, AI Entertainment. Anastassia emphasizes the need for a realistic understanding of AI's capabilities and limitations, while also highlighting the ethical concerns and risks associated with its use. With a focus on human-centricity and technology literacy, she offers valuable perspectives on integrating AI responsibly and beneficially.

 

You’ve recently become involved with the Global Mental Health Task Force. How can AI intersect with mental health help?

When we talk about technology use, there's a magical triangle: demand, supply, and maturity. In terms of demand, in 2023, 21% of the adult population in the United States reported experiencing anxiety attacks or some form of depression. And we have one mental health therapist for every 350 potential patients, which doesn't cover the demand.

Looking at the supply side, there are chatbots, which, in my opinion, aren't great. But we can also mix language models or chatbots with virtual- or augmented-reality applications. This could be used with great success for people who have specific mental health conditions. For example, people who are anxious or have a fear of heights could put on special virtual-reality glasses, work with a chatbot, and do some exercises. I've seen results from China suggesting that after 10 sessions, patients' conditions really improved.

Now we come to my favorite topic: the maturity of the technology. If we think about a helper who would guide someone through an anxiety attack, for example, I think we would envision someone who possesses a certain level of intelligence. The issue is that gen AI and deep learning models are not intelligent, and they will always hallucinate. Usually we can spot hallucinations easily, but in certain circumstances we might not be able to, and that presents a risk.

We can't cut corners and create a mental health assistant in a short time. It won't happen. This is not just about telling people what pills to take in what order, which can be done quite easily. To have a real conversation, with a human response, we need to be realistic about what current AI models can deliver.

Can you give us an example of a surprising application of AI to tackle a complex problem?

The 19th century meets the 21st century: A team used AI to create Beethoven’s 10th Symphony

The creation of Beethoven's 10th Symphony. Beethoven wrote nine symphonies, and at his death he left behind half a page of a scherzo, which we transformed into a whole new symphony. We had a human composer involved, Walter Werzowa, who had studied Beethoven and could improvise in his style. The dataset was just Beethoven's music, plus the music of other contemporary and influential composers from his time. This collaboration between the human composer and the machine produced a magnificent result. But it took two years to build and curate the dataset, and another year to finish the work.

Tell us about your new company, AI Entertainment.

Technology literacy is paramount. We need to understand the basics of AI and robotic technologies. For example, what does it mean to apply deep learning? What is regression? What is hallucination? I started the company because I believe we need to provide knowledge to people while also entertaining them. And we must start very early, because I think only those who grow up with this knowledge will be capable of giving us different architectures and types of AI.

Anastassia’s book introduces children to the concept of AI and how to use it properly

So I decided to start with children, and I thought it would be great to build a story where AI is as natural as a lake or sunshine. I write about a family who has a speaking robot. In my book, Romy and Roby and the Secrets of Sleep, the robot writes a poem about Rome, and he places the Eiffel Tower, which is obviously in Paris, next to the Colosseum in Rome. This was a real mistake that ChatGPT 3 made. The story goes on to explain how the robot is trained, and he succeeds in embracing his new knowledge.

This is a shortcut to a very advanced technology. Everything that happens in the book is rooted in technologies that exist today. Behind the children's story, there's science, which I try to explain in very practical words.

I also launched a podcast in April, and I am starting to partner with great educators, schools, and universities.

What are some specific risks and concerns regarding the ethical use of AI?

There are three buckets of risks when it comes to AI. The first is the design risk: we are humans, and humans make mistakes. Every dataset is biased, and if we try to de-bias a dataset, we create new biases.

The second bucket is AI in the wrong hands, which is my greatest concern. Deepfakes play a huge role there. The number of cyberattacks is increasing, and it's not just large corporations that are affected; even small businesses go out of business because of ransomware attacks. Regulators must pay attention, providing and insisting on frameworks that force businesses to be more prudent about managing digital assets and data risk. AI will be great in the hands of those trying to protect us, and unfortunately it will also help those who try to harm us.

The third bucket is "the human in the loop." For example, I think it will take a long time before we arrive at a truly self-driving car, because our world is messy. Life is messy. Human copilots will be needed to tweak and correct mistakes on the go. It's tremendously difficult to build safe systems that incorporate all the messiness of the world and function properly.

What are some key recommendations or strategies you have to ensure that the integration of AI technology aligns with ethics and human values?

There isn't one silver bullet to deliver perfection. A saying attributed to Pablo Picasso goes, "Computers are useless. They can only give you answers." So it's up to humans to formulate the right questions.

We need to pay attention to ethics, which can be tricky – for example, what is the geographical context? Is something that is ethical in Europe ethical in the Middle East? Can we blend and find a common denominator in our value systems? And do we agree on what we are trying to solve for?

I think the role of HR needs to grow in organizations. Human Resources should be about talent development and about figuring out how to build collaborative business models between humans and machines, as the Beethoven symphony project was.

Human Resources functions must use everything at hand, stay informed about what is out there, and provide knowledge to their employees. Things will improve when more of the population is using, talking about, and thinking about AI.

You've said that there are misconceptions in how AI drives innovations. What are a few of those misconceptions?

People aren't realistic about the economics of AI. As I've mentioned, it takes a long time to build a good dataset, and time is money. Unlike so many other technologies, AI isn't a tool that comes in a shiny box, ready to use as soon as you open it.

The economics are hard, because deep learning and gen AI models require a lot of energy from data centers. So if you don't have your own data center infrastructure, you might spend 25% of your revenue on cloud services.

On top of that, you might spend 15% of your revenue on cleaning and preparing your data for modeling.

We are just at the very beginning. I hope there will be innovation on the infrastructural and hardware side to make the whole thing easier. It's a marathon, not a sprint.

Any other thoughts you’d like to share?

We need to be in a perpetual education mode. Companies need to find new models to provide time and support for education, because I think people are feeling lost. There’s just too much noise and confusion out there.

Having said that, I'm a huge proponent of AI. I think it can enable a lot of problem solving in the world. For example, AI has solved one of biology's biggest challenges: predicting how a chain of amino acids folds into a 3D protein structure. This is a huge shortcut to building new drugs and discovering new molecules. There are so many opportunities I'm excited about. But we really must invest in technology literacy.
