As a Data Scientist, you will collaborate with clients and interdisciplinary teams to develop impactful analytics solutions, optimize code, and solve real-world business problems across diverse industries. You'll grow as a technologist by contributing to cutting-edge projects and R&D and by presenting at global conferences, all while working alongside world-class talent in a dynamic, innovative environment.
In this role, you will partner with clients to understand their needs, translate business problems into analytical challenges, build models to solve them, and ensure they are evaluated with relevant metrics. You'll also contribute to internal tools, participate in R&D projects, and have opportunities to attend and present at conferences such as NeurIPS and ICML.
Your work will create real-world impact. By identifying patterns in data and delivering innovative solutions, you will help clients maintain their competitive advantage and transform their day-to-day operations. Your contributions will directly influence business outcomes and drive lasting improvements across industries.
You will be based in Amsterdam or Brussels and collaborate closely with data scientists, data engineers, machine learning engineers, designers, and product managers around the world, working on interdisciplinary projects that use math, statistics, and machine learning to derive insights from raw data. You will help global companies transform their businesses and enhance performance across industries such as healthcare, automotive, energy, and elite sports.
At McKinsey, you’ll thrive in an unparalleled environment for growth. You’ll develop a sought-after perspective by connecting technology and business value, tackle real-life challenges across diverse industries, and collaborate with inspiring multidisciplinary teams, gaining a holistic understanding of AI and its potential to drive transformation.
Our Tech Stack
While we advocate for using the right tech for the right task, we often leverage the following technologies: Python, PySpark, the PyData stack, and SQL; Airflow and Databricks; our own open-source data pipelining framework, Kedro; Dask and RAPIDS; container technologies such as Docker and Kubernetes; and cloud solutions such as AWS, GCP, and Azure, among others.
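For a flavour of how parts of this stack fit together, here is a minimal sketch of a Kedro-style pipeline built on pandas. The node and dataset names (raw_orders, clean_orders, order_summary, customer_id, revenue) are hypothetical examples for illustration only; in a real project the datasets would be registered in a Kedro Data Catalog.

```python
# Minimal, illustrative Kedro pipeline sketch.
# Dataset and column names below are hypothetical, not from any real project.
import pandas as pd
from kedro.pipeline import node, pipeline


def clean_orders(raw_orders: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical preprocessing step: drop incomplete rows.
    return raw_orders.dropna()


def summarise_orders(clean_orders: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical aggregation: total revenue per customer.
    return clean_orders.groupby("customer_id")["revenue"].sum().reset_index()


# Wire the functions into a pipeline; inputs/outputs refer to catalog entries.
order_pipeline = pipeline(
    [
        node(clean_orders, inputs="raw_orders", outputs="clean_orders"),
        node(summarise_orders, inputs="clean_orders", outputs="order_summary"),
    ]
)
```

Structuring work as small, named nodes like this keeps data transformations testable and reusable, which is the kind of engineering discipline the role emphasises alongside modelling.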