Senior Databricks Data Engineer
Luxoft DXC
Krakow, Warsaw
Requirements
- PySpark
- SQL
- Delta Lake
Job description
We are seeking a Senior Data Engineer with strong hands-on expertise in Databricks, PySpark, and cloud-based data platforms to support the development, migration, and optimization of our enterprise data platform within the investment domain. This role will focus on building and maintaining scalable data pipelines and lakehouse data models that support investment analytics, portfolio management, risk analysis, and trading data workflows.

The successful candidate will work closely with data engineers, quantitative analysts, and investment stakeholders to deliver high-quality, reliable, and performant data solutions. Experience with financial datasets such as market data, portfolio holdings, transactions, pricing data, and risk metrics is highly valuable.

Data Engineering & Pipeline Development
- Build, optimize, and maintain end-to-end data pipelines using Databricks, PySpark, and SQL across ingestion, curation, and consumption layers.
- Develop and manage Declarative Pipelines (e.g., Lakeflow / DLT-style pipelines) to support scalable incremental processing and operational reliability.
- Implement robust transformations and modelling patterns to deliver trusted datasets for downstream consumption (analytics, operations, reporting, applications).

Data Quality, Controls & Operational Excellence
- Implement data quality validation, monitoring, reconciliation, and alerting to ensure datasets meet required standards for completeness, accuracy, timeliness, and consistency.
- Debug pipeline failures, resolve production incidents, and continuously improve pipeline stability, performance, and cost efficiency.
- Apply best practices around auditability, lineage, and data correctness, particularly in time-series and historically tracked datasets.

Data Modelling & Domain Delivery
- Contribute to the design and evolution of data models supporting the organization’s investment footprint (Public Markets, Private Markets, reference/master data, corporate actions, portfolio, pricing, risk, etc.).
- Partner with business stakeholders to translate requirements into implementable data solutions while preserving maintainability and governance standards.
- Support integration of multi-vendor and internal data sources into curated datasets that align with ADIA’s operational and analytical needs.

Platform & Engineering Standards
- Follow and enhance engineering standards for version control, CI/CD, testing, documentation, and secure development practices.
- Optimize compute and storage usage through partitioning/clustering strategies, incremental patterns, and performance tuning.
- Contribute reusable libraries, patterns, templates, and approaches that improve delivery speed and consistency across the team.