Burgeoning data analyzed by ever more intelligent machines are opening pathways to surprising applications and providing solutions to problems that have been out of reach. In the film industry, machines “watch” movies and videos, charting their emotional intensity and giving content creators clues about how to make stories more appealing. And in banking, AI’s ability to detect anomalies among millions of transactions helps bank risk officers eliminate false positives that are a drain on productivity. For a growing number of industries, AI is tilting the playing field—you’ll need to understand how before your competitors do.
Using artificial intelligence to engage audiences
By Eric Chu, Deb Roy, and Jonathan Dunn
Machine-learning models can help screenwriters and directors fine-tune scripts and imagery. Company communicators should take note.
Master storytellers are skilled at eliciting our emotions, but even the best sometimes miss the mark. Could machines, using artificial-intelligence (AI) capabilities, collaborate with writers to improve their stories?
McKinsey and the Massachusetts Institute of Technology Media Lab recently studied that question, focusing on movies and videos. We speculated that a story’s emotional arc—shifts in tension and emotion that shape a narrative as it progresses and develops—determines viewer engagement. To test our theory, we developed machine-learning models to “watch” small slices of video and estimate their emotional content. When the content of all the slices is considered together, the story’s emotional arc emerges. The models can evaluate audio and visual elements in isolation or together.
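In outline, the approach scores each short slice for valence and assembles the scores, in order, into an arc. A minimal sketch of the aggregation step follows; the function name, the smoothing window, and the valence scale are our illustrative assumptions, and the trained scoring model that produces the per-slice values is not shown:

```python
def emotional_arc(slice_scores, window=3):
    """Assemble per-slice valence scores into a story's emotional arc.

    slice_scores: ordered valence estimates in [-1, 1] for short video
    slices, as a trained model (not shown) might produce: negative values
    for sad or tense moments, positive values for joyful ones.
    A trailing moving average smooths slice-level noise into an arc.
    """
    arc = []
    for i in range(len(slice_scores)):
        lo = max(0, i - window + 1)          # start of the trailing window
        arc.append(sum(slice_scores[lo:i + 1]) / (i + 1 - lo))
    return arc
```

The same aggregation works whether the per-slice scores come from visual frames, the audio track, or a combination, which is how the models can treat those elements in isolation or together.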
Consider the opening sequence of the movie Up, which provides the backstory for Carl, the main character. The visual valence—or the extent to which an image elicits positive or negative emotions—alternates throughout the opening sequence (Exhibit 1). The valence plummets, for instance, when Carl returns home after his wife, Ellie, dies.
After analyzing data for thousands of videos, we classified stories into families based on their emotional arc. Some families had stories with extremely positive endings, and these tended to generate the most comments on social media (Exhibit 2). This finding supports prior research showing that positive feelings generate the greatest audience engagement.
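One coarse way to illustrate the grouping into families is to label each arc by its ending valence, since the ending is what distinguished the most-commented-on family. This sketch is ours, not the study's method; the threshold, the number of final slices averaged, and the labels are all illustrative assumptions:

```python
def family_by_ending(arc, threshold=0.5):
    """Coarsely label an arc's family by its ending valence (illustrative).

    arc: a sequence of valence scores in [-1, 1]; the final few slices
    stand in for the story's ending.
    """
    ending = arc[-3:]                         # final slices of the arc
    end_valence = sum(ending) / len(ending)
    if end_valence >= threshold:
        return "very positive ending"
    if end_valence <= -threshold:
        return "very negative ending"
    return "neutral ending"
```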
Our results suggest that AI could play a supporting role in video creation. As always, human storytellers would create a screenplay with clever plot twists and realistic dialogue. AI would enhance their work by providing insights that increase a story’s emotional pull—for instance, identifying a musical score or visual image that helps engender feelings of hope. This breakthrough technology could supercharge storytellers, and not just in the movie business. For example, AI insights could potentially improve the emotional pull of commercials or corporate communications.
About the authors
Eric Chu is a doctoral candidate at the Massachusetts Institute of Technology and conducts research at the Laboratory for Social Machines, part of MIT’s Media Lab, where Deb Roy is the director. Jonathan Dunn is a partner in McKinsey’s New York and Southern California offices.
The authors wish to thank Geoffrey Sands and MIT Media Lab’s Russell Stevens for their contributions to this article.
For the full report on which this article is based, see “AI in storytelling: Machines as cocreators.”
Monitoring money-laundering risk with machine learning
By Piotr Kaminski and Jeff Schonert
More robust algorithms applied to better data can reduce the false positives that drive up banks’ costs of policing risk.
Money laundering is a low-frequency event, but banks can pay a high price for missing an incident. To detect money laundering, banks deploy monitoring systems to alert them to atypical transactions. Based on certain criteria, a financial investigations unit then attempts to identify likely instances of money laundering from among the alerts, filing suspicious-activity reports with appropriate authorities as needed.
But anti–money laundering (AML) operations are often hampered by high levels of false positives—much higher than you would expect. Here’s why: a very effective transaction-monitoring system might be 95 percent sensitive in detecting suspicious activity and 95 percent specific. That is, the control catches 95 percent of truly suspicious transactions, but it also falsely flags 5 percent of normal ones, and every flagged case requires further work to determine whether it is legitimate or suspicious. Now suppose only 0.1 percent of transactions truly meet the criteria for suspicious activity. In every 1,000 transactions, the system will then raise roughly one true alert alongside some 50 false ones, for a false-positive rate of more than 98 percent. Fewer than 2 percent of alerts will correspond to activity that, upon further examination, qualifies as suspicious.
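The arithmetic above can be checked with a short calculation. The figures (95 percent sensitivity, 95 percent specificity, 0.1 percent prevalence) come from the example; the function name is ours:

```python
def alert_false_positive_rate(sensitivity, specificity, prevalence):
    """Share of alerts that are false positives, given a monitoring
    system's sensitivity, specificity, and the prevalence of truly
    suspicious transactions."""
    true_alerts = prevalence * sensitivity                # suspicious and flagged
    false_alerts = (1 - prevalence) * (1 - specificity)   # normal but flagged
    return false_alerts / (true_alerts + false_alerts)

rate = alert_false_positive_rate(0.95, 0.95, 0.001)
print(f"{rate:.1%}")  # roughly 98 percent of alerts are false positives
```

The result is dominated by prevalence: because genuinely suspicious activity is so rare, even a small specificity gap produces far more false alerts than true ones.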
At one large US bank, the false-positive rate in AML alerts was very high. The elaborate remedial process and meager result were overtaxing resources. To improve the up-front specificity of its tests so that AML expertise could be better utilized, the bank looked at the data and algorithms it was using. It discovered that databases identifying customers and transactions lacked key information. By adding more data elements and linking systems through machine-learning techniques, the bank achieved a more complete understanding of the transactions being monitored.
It turned out that more than half of the cases alerted for investigation were perfectly innocuous intracompany transactions. With a more complete database, the bank was able to keep its monitoring system from issuing alerts for these transactions, which substantially freed resources to fight actual money laundering and fraud (exhibit). Combined with better data, machine learning and other forms of artificial intelligence can also be used to combat false positives in a variety of banking activities—such as those that mine data for an individual’s creditworthiness or probe digital interactions for signs of cybersecurity threats.
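One way to suppress such alerts, sketched here with hypothetical field names, is to resolve both counterparties against the linked customer data and skip transactions that stay inside a single corporate group before the monitoring rules run:

```python
def should_alert(txn, entity_of, monitoring_rule):
    """Suppress alerts for intracompany transfers before applying rules.

    txn: a transaction record with hypothetical "sender" and "receiver"
    account fields. entity_of: maps an account to its resolved corporate
    entity, the kind of linkage the bank built by joining its customer
    databases. monitoring_rule: the existing alerting logic.
    """
    sender_entity = entity_of.get(txn["sender"])
    receiver_entity = entity_of.get(txn["receiver"])
    if sender_entity is not None and sender_entity == receiver_entity:
        return False  # same corporate group: innocuous internal transfer
    return monitoring_rule(txn)
```

The point of the sketch is ordering: the cheap entity-resolution check runs first, so the costlier monitoring rules (and the investigators behind them) see only transactions that cross entity boundaries.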
About the authors
Piotr Kaminski is a senior partner in McKinsey’s New York office, where Jeff Schonert is an associate partner.
For the full article, see “The neglected art of risk detection.”