Pracuj.pl | Remote work | Expert

Lead Data Engineer (PySpark)

EPAM Systems (Poland) sp. z o.o.

Kraków, Grzegórzki

Requirements

  • Azure Fabric
  • OneLake
  • Delta
  • OpenLake
  • Python
  • PySpark
  • SparkSQL
  • Cosmos DB
  • Azure DevOps
  • Microsoft Power BI

Job description

Our requirements:

  • 9+ years of experience in data engineering, with AI exposure
  • Expertise in Azure Fabric and end-to-end Fabric experience
  • Knowledge of OneLake (Delta / OpenLake)
  • Advanced skills in Python, PySpark and SparkSQL
  • Proficiency in Cosmos DB (NoSQL API) and other Cosmos DB variants
  • Understanding of DF Gen2 and M-code
  • Capability to implement CI/CD pipelines using Azure DevOps or equivalent tools
  • Experience with general Azure services and Power BI integration, semantic models and performance considerations
  • Background in Agile or Scrum environments
  • Strong ownership mindset and the ability to lead by example
  • Excellent communication skills for technical and non-technical audiences
  • Good understanding of the financial domain

Nice to have:

  • Experience with code generation, including non-AI and AI-assisted approaches
  • Azure AI Foundry experience
  • Data Science fundamentals and collaboration with DS teams
  • Strong background in Big Data architectures and Spark ecosystems
  • Familiarity with financial instruments and financial services data
  • Hands-on experience with industry-standard LLMs (including GPT, Claude or similar)
  • Exposure to AI-enabled data platforms and intelligent analytics use cases

About the project:

We are seeking a Lead Data Engineer with deep expertise in Microsoft Azure Fabric–based data platforms and AI-enabled data engineering. This role blends hands-on technical leadership with architectural and team mentoring responsibilities, focusing on modern data engineering, big data processing, and AI-driven workflows in complex enterprise environments.
Responsibilities:

  • Lead the design, development and optimization of scalable data engineering solutions using Azure Fabric and cloud-native technologies
  • Own end-to-end data pipelines, including ingestion, transformation, storage and analytics
  • Architect and implement solutions leveraging OneLake (Delta / OpenLake) and Fabric experiences
  • Develop and optimize PySpark, SparkSQL and Python-based data processing pipelines
  • Work with Cosmos DB (NoSQL API) and other Cosmos DB variants to support high-performance data access patterns
  • Implement and maintain CI/CD pipelines and promote DevOps best practices
  • Collaborate with data scientists, AI engineers and product stakeholders to enable AI-driven analytics and insights
  • Mentor and guide junior engineers, setting coding standards and best practices
  • Ensure data quality, security, governance and performance across platforms
  • Contribute to technical decision-making and solution architecture discussions

We offer:

  • Engineering community of industry professionals
  • Friendly team and enjoyable working environment
  • Flexible schedule and the opportunity to work remotely within Poland
  • Chance to work abroad for up to 60 days annually
  • Business-driven relocation opportunities
  • Outstanding career roadmap
  • Leadership development, career advising, soft skills, and well-being programs
  • Certifications (GCP, Azure, AWS)
  • Unlimited access to LinkedIn Learning, getAbstract and Cloud Guru
  • English language classes
  • Stable income (Employment Contract or B2B)
  • Participation in the Employee Stock Purchase Plan
  • Benefits package (health insurance, Multisport, shopping vouchers)
  • Strategically located offices featuring entertainment and relaxation zones, table tennis and football, free snacks, fantastic coffee, and more
  • Referral bonuses
  • Corporate, social and well-being events