Azure Platform Engineer
Xebia sp. z o.o.
⚲ Rzeszów, Wrocław, Gdańsk, Warszawa
20 000 - 33 000 PLN (B2B)
Requirements
- Azure
- Azure Databricks
- Terraform
- CI/CD
Job description
About the project:

Who We Are
While Xebia is a global tech company, our journey in CEE started with two Polish companies – PGS Software, known for world-class cloud and software solutions, and GetInData, a pioneer in Big Data. Today, we’re a team of 1,000+ experts delivering top-notch work across cloud, data, and software. And we’re just getting started.

What We Do
We work on projects that matter – and that make a difference. From fintech and e-commerce to aviation, logistics, media, and fashion, we help our clients build scalable platforms, data and AI solutions, and cutting-edge applications that shape the future of tech. Our clients include McLaren, Aviva, Deloitte, Spotify, Disney, ING, UPS, Tesco, Truecaller, AllSaints, Volotea, Schmitz Cargobull, Allegro, InPost, and many, many more. We value smart tech, real ownership, and continuous growth. We use modern, open-source stacks, and we’re proud to be trusted partners of Databricks, dbt, Snowflake, Azure, GCP, and AWS. Fun fact: we were the first AWS Premier Partner in Poland!

Beyond Projects
What makes Xebia special? Our community. We support tech communities, organize meetups (Software Talks, Data Tech Talks), and have a culture that actively supports your growth via Guilds, Labs, and personal development budgets – for both tech and soft skills. It’s not just a job. It’s a place to grow. What sets us apart? Our mindset. Our vibe. Our people. And while that’s hard to capture in text – come visit us and see for yourself.
Requirements:
- ready to join within 2 weeks,
- strong experience as a Platform Engineer / DevOps Engineer,
- hands-on experience with Microsoft Azure (compute, networking, storage, security),
- practical experience in building and operating data platforms (especially analytics-focused),
- strong hands-on experience with Azure Databricks, including platform foundations (workspace setup, connectivity, security/identity, cluster policies, automation),
- prior experience with Microsoft Fabric, or at least strong working knowledge of it (the target architecture includes Fabric components),
- experience with infrastructure as code (e.g. Terraform),
- knowledge of CI/CD pipelines and automation,
- good understanding of data concepts (data lakes, data warehouses, ETL/ELT, analytics workloads),
- ability to work independently and take ownership of platform-related topics,
- experience enabling ingestion from a wide variety of sources, including Autodesk Construction Cloud (ACC), Google Drive, SharePoint, Dataverse, Amazon S3, and on-prem file shares, including handling authentication and network/security constraints (e.g. private connectivity, proxies, IP allowlists),
- knowledge of monitoring, logging, and alerting in cloud environments,
- understanding of security and governance in data platforms,
- practical experience using AI-powered assistants (e.g. Claude Code, GitHub Copilot, Cursor) to improve productivity, quality, or decision-making in software delivery,
- upper-intermediate/advanced English (B2/C1),
- work from the European Union region and a work permit are required.
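For a rough sense of what the "platform foundations as infrastructure as code" requirement above involves in practice, a Databricks workspace and a cluster policy are often codified in Terraform along these lines. This is an illustrative sketch only, not part of the role description; all resource names, the SKU, and the policy limits are hypothetical examples.

```hcl
# Hypothetical sketch: an Azure Databricks workspace and a baseline
# cluster policy managed as infrastructure as code with Terraform.
# All names and values below are examples, not project specifics.

resource "azurerm_resource_group" "platform" {
  name     = "rg-data-platform" # hypothetical name
  location = "westeurope"
}

resource "azurerm_databricks_workspace" "main" {
  name                = "dbw-data-platform" # hypothetical name
  resource_group_name = azurerm_resource_group.platform.name
  location            = azurerm_resource_group.platform.location
  sku                 = "premium" # premium tier is typically required for advanced security features
}

# Cluster policy (Databricks Terraform provider) constraining
# auto-termination and allowed node types for user-created clusters.
resource "databricks_cluster_policy" "baseline" {
  name = "baseline-policy"
  definition = jsonencode({
    "autotermination_minutes" = { "type" = "range", "maxValue" = 60 }
    "node_type_id"            = { "type" = "allowlist", "values" = ["Standard_DS3_v2"] }
  })
}
```

In real setups this would be extended with VNet injection, private endpoints, and identity configuration, which is what the "connectivity, security/identity" part of the requirement refers to.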
Nice to have:
- experience with Microsoft Purview or data governance tools,
- familiarity with FinOps / cost optimization in Azure,
- background in supporting data engineering or analytics teams,
- experience working in international or distributed teams,
- experience working in enterprise-scale environments,
- experience applying GenAI in a more structured way within the SDLC, including defined workflows, prompt patterns, or tool integrations embedded into daily work,
- interest in and familiarity with emerging AI-driven practices (e.g. agent-based workflows, automation patterns, AI-augmented development), with a willingness to explore and experiment beyond standard approaches.

Daily tasks:
- designing and implementing a cloud-based data platform in Azure, centered on Azure Databricks and integrating Fabric components where planned,
- building and maintaining platform components, CI/CD pipelines, and infrastructure as code,
- supporting data engineers, analysts, and data scientists by providing stable and scalable platform services,
- ensuring security, monitoring, cost optimization, and operational best practices,
- designing and implementing secure connectivity and ingestion patterns from heterogeneous and unusual sources (incl. cross-cloud and on-prem), often under enterprise network/security constraints,
- working as part of the core delivery team, collaborating closely with data, cloud, and architecture colleagues, and aligning interfaces/requirements with the AI consumption team (the main external stakeholder),
- supporting the evolution of the platform towards AI-ready data serving, including potential semantic layer components where applicable.