NoFluffJobs | On-site | Senior

Senior Data Engineer

AVENGA (Employment Agency, KRAZ no. 8448)

⚲ Wrocław

28 000 - 31 000 PLN (B2B)

Requirements

  • Data pipelines
  • Azure Data
  • Azure Databricks
  • Unity Catalog
  • Security
  • Python
  • Spark
  • ADLS
  • Azure Data Factory
  • Azure Synapse
  • API
  • REST API
  • ETL
  • Data modeling
  • CI/CD Pipelines
  • Git
  • Big Data (nice to have)
  • Databricks (nice to have)
  • Power BI (nice to have)

Job description

Requirements:

Required Skills & Qualifications

  • Strong expertise in Azure Databricks, Python, and Spark (must-have).
  • Solid experience with Azure Data services such as ADLS Gen2 and Azure Data Factory / Azure Synapse.
  • Experience with APIs (REST or equivalent).
  • Solid understanding of ETL processes, data modeling, and fundamental data architecture principles.
  • Familiarity with CI/CD pipelines and version control (Git, etc.).
  • Strong problem-solving skills and a structured way of working within an agile delivery model.
  • Excellent communication and collaboration skills: you enjoy interacting with business stakeholders and cross-functional teams.

Nice-to-Have Skills

  • Experience with big data processing and Structured Streaming in Databricks.
  • Familiarity with Declarative Automation Bundles (f.k.a. Databricks Asset Bundles).
  • Knowledge of data governance, lineage, and security best practices; experience working with Unity Catalog.
  • Exposure to Power BI or similar visualization tools.

Daily tasks:

  • Design, develop, and optimize data pipelines and workflows using Azure Databricks and Python.
  • Implement and maintain data solutions across the Azure Data Platform (Data Lake, Data Factory, Synapse, etc.).
  • Develop and maintain data products exposed through Unity Catalog.
  • Collaborate closely with business stakeholders to gather requirements and translate them into technical solutions.
  • Ensure data quality, security, and compliance across all solutions.
  • Integrate data solutions into existing business processes and cross-functional systems.
  • Continuously improve the performance, cost-efficiency, and scalability of data systems.
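The daily tasks above revolve around ETL, data quality, and data modeling. A minimal sketch of that kind of transformation step is shown below; in the role itself this logic would typically run as PySpark on Azure Databricks, but plain Python is used here so the sketch stays self-contained, and all field names and the schema are purely illustrative assumptions, not anything specified by the posting.

```python
from datetime import datetime, timezone


def transform_orders(raw_records):
    """Illustrative ETL step: apply a quality gate, deduplicate on a
    business key, and model records into a target analytics shape."""
    seen = set()
    cleaned = []
    for rec in raw_records:
        # Quality gate: drop records missing required fields.
        if not rec.get("order_id") or rec.get("amount") is None:
            continue
        # Deduplicate on the (hypothetical) business key.
        if rec["order_id"] in seen:
            continue
        seen.add(rec["order_id"])
        # Model into the target shape: typed amount plus a load timestamp.
        cleaned.append({
            "order_id": rec["order_id"],
            "amount_pln": round(float(rec["amount"]), 2),
            "loaded_at": datetime.now(timezone.utc).isoformat(),
        })
    return cleaned


raw = [
    {"order_id": "A1", "amount": "120.50"},
    {"order_id": "A1", "amount": "120.50"},  # duplicate, dropped
    {"order_id": None, "amount": "33.00"},   # fails the quality gate
    {"order_id": "B2", "amount": 75},
]
rows = transform_orders(raw)
print([r["order_id"] for r in rows])  # → ['A1', 'B2']
```

In a Databricks pipeline the same gate/deduplicate/model pattern would usually be expressed with DataFrame operations and written to a governed table, but the shape of the logic is the same.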