AI business applications and ethical issues in focus at O’Reilly AI conference

by Laura DeLallo

The power of machine learning to tackle myriad business use cases topped the agenda of the AI conference presented by O’Reilly and Intel AI in New York City last week. But ethical issues surrounding artificial intelligence (AI), such as data privacy, also took center stage, as did some cutting-edge technologies.

Tech providers and enterprises from across industries shared how they use machine learning, the most prevalent AI technique in practice today, and its subset, deep learning, to serve customers and optimize processes. Applications ranged from financial uses, such as consumer-loan processing and fraud detection, to creative ones, such as music generation, to several types of personalization, including a peek inside the technology that personalizes Facebook’s news feed.

We also saw signs that some companies are exploring business applications of earlier-stage AI techniques, such as reinforcement learning, in which an algorithm learns by trial and error, repeating the actions that earn it rewards. In a deep-dive session, Mark Hammond, CEO of deep-reinforcement-learning platform provider Bonsai, showed how his company worked with Siemens to develop an AI model that can calibrate a computer numerical control (CNC) machine 30 times faster than a human engineer can. Siemens hopes to use the application to reduce equipment downtime.
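
To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement-learning algorithms, on a made-up five-state corridor; the environment, reward, and parameters are illustrative assumptions, not the Bonsai or Siemens setup.

# A minimal sketch of tabular Q-learning on a toy five-state corridor.
# The agent learns by trial and error: it tries actions, observes rewards,
# and gradually reinforces the choices that lead to the goal state.
import random

N_STATES = 5          # states 0..4; reaching state 4 yields a reward
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Temporal-difference update: nudge Q toward reward + discounted future value
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy should always move right, toward the goal
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])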

Beyond the technology: Bias, explainability, and privacy

Session topics and attendee questions reflected concern about some of the issues swirling around artificial intelligence. AI academics and practitioners alike spoke about the difficulty of preventing machine-learning models from becoming biased. Jana Eggers, CEO of AI platform provider Nara Logics, explained that one of the best defenses is having a culturally diverse set of people working on AI data collection and models. Still, that approach isn’t foolproof, she said; only through extensive testing can biases be surfaced and corrected. Olga Russakovsky, an assistant professor of computer science at Princeton and a computer-vision specialist, pointed out the virtual impossibility of manually sifting through, for example, image data to surface and weed out bias. “You need a model to detect bias, but then how do you know if the model itself is unbiased?” she said.
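
As a concrete illustration of what such testing can look like, here is a minimal sketch of one basic fairness check, comparing a model’s positive-prediction rate across demographic groups (demographic parity); the predictions and group labels are made up for illustration, and real audits use many more metrics.

# One basic bias test: compare how often the model predicts a positive
# outcome for each demographic group. Large gaps warrant investigation.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]                  # hypothetical model outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]       # hypothetical group labels

for g in sorted(set(groups)):
    outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
    print(f"group {g}: positive rate = {sum(outcomes) / len(outcomes):.2f}")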

Another thorny problem in AI, particularly for complex deep-learning models, is enabling humans to understand how a model reached a particular conclusion. Achieving model “explainability” is important for building human trust in AI and is a near necessity in some industries, such as medicine and financial services: the former because doctors need to understand the reasoning behind potentially life-or-death AI recommendations, the latter because regulators in some cases require it.
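
To show what an interpretable rationale can look like in practice, here is a minimal sketch using a linear model whose coefficients double as explanations; the loan-approval features and data are hypothetical, and this is one common technique, not the approach of any company mentioned here.

# A minimal sketch of one common explainability approach: fitting an
# interpretable model and reading off per-feature contributions.
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
X = [
    [65, 0.2, 8],
    [30, 0.6, 1],
    [80, 0.1, 12],
    [25, 0.7, 0],
    [50, 0.4, 4],
    [70, 0.3, 9],
]
y = [1, 0, 1, 0, 0, 1]  # 1 = loan approved in the (made-up) historical data

model = LogisticRegression(max_iter=1000).fit(X, y)

# Each coefficient indicates how a feature pushes the decision up or down,
# giving a human-readable rationale for any individual prediction.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")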

Companies creating and deploying AI applications recognize the need for more transparency in AI models. Uber is among the organizations investing in explainability research, said Zoubin Ghahramani, the company’s chief scientist. He believes AI systems could eventually become more interpretable than humans are. “Really, how interpretable are human actions?” he said.

With recent data-privacy controversies fresh in everyone’s mind and the General Data Protection Regulation (GDPR) just weeks away from enforcement, data privacy came up in many sessions, albeit with more questions than answers, given continued confusion around the regulation. In their GDPR deep-dive session, McKinsey partner Kayvaun Rowshankish and associate partner Alexis Trittipo explained steps organizations can take to comply with the regulation when using AI, such as strengthening model validation and control and anchoring to supervised techniques when personally identifiable information is involved. They also shared ways AI could actually help with GDPR compliance, such as by automating responses to data-subject rights requests.
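
As one illustration of a privacy-minded data step (our own assumption, not a technique the speakers described), here is a minimal sketch that pseudonymizes personally identifiable fields with salted one-way hashes before a record reaches a model; the field names, salt handling, and record are illustrative.

# A minimal sketch of pseudonymizing personally identifiable information
# before it enters a modeling pipeline. Field names and salt handling are
# illustrative assumptions; real deployments manage secrets in a vault.
import hashlib

PII_FIELDS = {"name", "email"}
SALT = b"rotate-and-store-this-secret-separately"  # assumption: kept out of the dataset

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with salted one-way hashes."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            out[key] = hashlib.sha256(SALT + str(value).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "loan_amount": 12000}
print(pseudonymize(record))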

The most futuristic AI application we saw

During a keynote session, Thomas Reardon, cofounder and CEO of CTRL-Labs, presented his company’s cutting-edge work on neural interfaces. His team has developed a bracelet that picks up signals from the body’s motor units and uses neural networks to interpret intended movements, what Reardon calls “intention capture.” The results were stunning. In one video demonstration, a person wearing the bracelet played the Atari game Asteroids with his hand resting on a desk and making virtually no hand movements; the bracelet simply picked up and executed the moves the player thought about making and signaled through his muscle fibers. In another demonstration, a bracelet wearer “typed” using finger movements but no actual keyboard. Reardon said he expects the technology to one day eliminate the need for people to interact with physical devices such as keyboards and cell phones, and that CTRL-Labs will sell the first version of the bracelet technology by the end of the year.

Laura DeLallo is the senior editor for McKinsey Analytics and is based in the Stamford office.