You will work in our McKinsey Client Capabilities Network in EMEA and will be part of our Wave Transformatics team, based in the Brussels office.
Wave is a McKinsey SaaS product that equips clients to manage improvement programs and transformations successfully. Focused on business impact, Wave lets clients track the impact of individual initiatives and understand how they contribute to longer-term goals. By combining an intuitive interface with McKinsey business expertise, it gives clients a simple, insightful picture of what can otherwise be a complex process, allowing them to track the progress and performance of initiatives against business goals, budgets and time frames.
Our Transformatics team builds data and AI products that deliver analytics insights to clients and McKinsey teams involved in transformation programs across the globe. The current team is composed of data engineers, data scientists and project managers spread across several geographies. The team covers a variety of industries, functions, analytics methodologies and platforms, e.g., cloud data engineering, advanced statistics, machine learning, predictive analytics, MLOps and generative AI.
As a member of the team, you will be responsible for designing, building, and optimizing scalable data solutions that power analytics, reporting, and machine learning. Working alongside data engineers across global hubs, you will lead the development of robust data ingestion pipelines: sourcing data from APIs, integrating it into cloud-based storage layers, and ensuring data quality through rigorous cleaning and standardization.
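To make the cleaning-and-standardization step concrete, the sketch below shows what such logic might look like in Python. This is a minimal illustration, not Wave's actual pipeline; the field names (`initiative_id`, `reported_at`) and the accepted date formats are assumptions made for the example.

```python
from datetime import datetime

REQUIRED_FIELDS = {"initiative_id", "value", "reported_at"}

def clean_records(raw_records):
    """Drop incomplete rows, trim strings, coerce types, and
    normalize dates to ISO 8601 before loading to storage."""
    cleaned = []
    for rec in raw_records:
        if not REQUIRED_FIELDS <= rec.keys():
            continue  # skip rows missing any mandatory field
        cleaned.append({
            "initiative_id": str(rec["initiative_id"]).strip(),
            "value": float(rec["value"]),
            "reported_at": _to_iso(rec["reported_at"]),
        })
    return cleaned

def _to_iso(date_str):
    """Accept either DD/MM/YYYY or ISO input; emit an ISO date string."""
    for fmt in ("%d/%m/%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(date_str.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {date_str!r}")
```

In a real pipeline this function would sit between the API fetch and the write to the cloud storage layer, so only validated, consistently typed rows reach downstream consumers.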
In this role, you will play a key part in building next-generation cloud-based data platforms that enable rapid data access for business stakeholders and support the incubation of emerging technologies. Your work will focus on designing and developing scalable, reusable data products that serve as the foundation for analytics, reporting, and machine learning pipelines.
You will lead the architecture and optimization of data pipelines, driving the design and enhancement of ETL/ELT workflows using tools such as AWS Lambda, AWS Glue, Snowflake, and Databricks. Ensuring high performance, scalability, and cost efficiency in data processing solutions will be a critical aspect of your responsibilities. Managing and scaling cloud-based data infrastructure will also be central to your role, including configuring optimized storage solutions like S3, Snowflake, and Delta Lake, and overseeing computation resources to support efficient data operations.
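One common way to keep such ETL/ELT workflows cost-efficient is incremental (high-water-mark) loading, so each run processes only records newer than the last successful run. Below is a minimal sketch, assuming records carry an ISO-8601 `updated_at` field; the function name and schema are illustrative, not part of any specific AWS, Snowflake, or Databricks API.

```python
def plan_incremental_load(records, last_watermark):
    """Return the batch of records newer than last_watermark, plus the
    new watermark to persist for the next run (high-water-mark pattern).

    ISO-8601 timestamp strings sort correctly as plain strings, so no
    date parsing is needed for the comparison.
    """
    batch = [r for r in records if r["updated_at"] > last_watermark]
    new_watermark = max((r["updated_at"] for r in batch), default=last_watermark)
    return batch, new_watermark
```

Persisting the returned watermark (for example in a control table) is what keeps repeated runs idempotent and avoids reprocessing the full history on every schedule.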
Your expertise will be instrumental in implementing advanced performance optimization techniques, such as query tuning, indexing strategies, partitioning, and caching, to maximize efficiency in platforms like Snowflake and Databricks. Collaboration will be a key focus, as you work closely with data scientists, engineers, and business teams to deliver well-structured, analytics-ready datasets that drive insights and power machine learning initiatives.
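Partitioning pays off because a query can skip whole partitions that cannot contain matching rows. The sketch below illustrates date-based partition pruning in simplified form, with made-up partition metadata; engines such as Snowflake and Databricks perform this pruning internally from their own metadata rather than in user code.

```python
def prune_partitions(partitions, start_date, end_date):
    """Keep only partitions whose [min_date, max_date] range overlaps
    the queried [start_date, end_date] window; the rest are never
    scanned. Dates are ISO-8601 strings, which compare correctly
    as plain strings."""
    return [
        p for p in partitions
        if p["min_date"] <= end_date and p["max_date"] >= start_date
    ]
```

The same overlap test is why choosing a partition (or clustering) key aligned with common query filters matters: it determines how much data each query can skip.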
In addition to technical responsibilities, you will establish and enforce data governance practices, ensuring compliance with industry standards such as SOC 2 and GDPR. This includes implementing robust access controls, tracking data lineage, and maintaining encryption standards to safeguard data security. Automation and monitoring will be integral to your work, as you build resilient, automated workflows using tools like Step Functions and Databricks Workflows, while also implementing proactive monitoring, logging, and alerting systems to ensure reliability and data quality.
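The monitoring side of such workflows often reduces to running named data-quality checks and logging failures so an alerting system can pick them up. Here is a minimal sketch using Python's standard logging module; the check names and row schema are invented for the example.

```python
import logging

logger = logging.getLogger("data_quality")

def run_quality_checks(rows, checks):
    """Apply each named predicate to every row; log a warning per
    failing check and return {check_name: failure_count}, a shape
    that alerting hooks and dashboards can consume directly."""
    report = {}
    for name, predicate in checks.items():
        failures = sum(1 for row in rows if not predicate(row))
        report[name] = failures
        if failures:
            logger.warning("quality check %r failed for %d rows", name, failures)
    return report
```

In an orchestrated pipeline (Step Functions, Databricks Workflows), a step like this would run after each load, with a nonzero failure count routed to the alerting channel.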
Mentorship and innovation will be key aspects of your role, as you guide junior engineers, contribute to internal knowledge-sharing initiatives, and stay ahead of emerging technologies. By championing continuous improvement in data engineering methodologies, you will help drive innovation and strengthen the organization's data capabilities.