
Data Engineer

Experis Polska

Wrocław

21 840 PLN (B2B)

Requirements

  • Data pipelines
  • Azure Databricks
  • Data Lake
  • Databricks
  • Security
  • Python
  • Spark
  • Azure Data
  • Azure Data Factory
  • Qlik
  • IBM MQ
  • API
  • REST API
  • JSON
  • ETL
  • Data modeling
  • CI/CD Pipelines
  • Git
  • Agile (nice to have)
  • Big data (nice to have)
  • Performance tuning (nice to have)
  • Power BI (nice to have)

Job description

About the project

Role: Data Engineer
Location: hybrid, 2-3 days per week in the Wrocław office
Salary: 130 PLN/h (B2B)

This role is hands-on, highly collaborative, and ideal for someone who enjoys turning complex data challenges into reliable, production-grade solutions.

Offer:

  • Multisport Card
  • Life insurance
  • Private healthcare
  • PowerYou platform

About the company

Experis is a global leader in recruiting specialists and executives in key IT areas. With us you will find competitive job offers and interesting IT projects aimed both at experts with years of experience and at people just starting out in the IT industry. We recruit managers and highly qualified consultants with IT industry experience. Experis is part of ManpowerGroup and has been recognized as one of the world's most ethical companies.

Required skills and qualifications:

  • Strong expertise in Azure Databricks, Python, and Spark (must-have)
  • Solid experience with Azure Data services such as Azure Data Lake and Azure Data Factory
  • Experience with ingestion tools such as Qlik Replicate / MQ / MQCC
  • Experience with API development (REST/JSON or equivalent)
  • Solid understanding of ETL processes, data modeling, and fundamental data architecture principles
  • Familiarity with CI/CD pipelines and version control (Git, etc.)
  • Strong problem-solving skills and a structured way of working within an agile delivery model
  • Excellent communication and collaboration skills - you enjoy interacting with business stakeholders and cross-functional teams

Nice-to-have skills:

  • Experience with big data processing and performance tuning in Spark
  • Knowledge of data governance, lineage, and security best practices
  • Exposure to Power BI or similar visualization tools

Daily tasks:

  • Design, develop, and optimize data pipelines and workflows using Azure Databricks and Python
  • Implement and maintain data solutions across the Azure Data Platform (Data Lake, Data Factory, Databricks, etc.)
  • Develop and maintain APIs supporting data ingestion, data sharing, and interoperability
  • Collaborate closely with business stakeholders to gather requirements and translate them into technical solutions
  • Ensure data quality, security, and compliance across all solutions
  • Integrate data solutions into existing business processes and cross-functional systems
  • Continuously improve performance, cost-efficiency, and scalability of data systems
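To give a flavor of the REST/JSON ingestion and ETL work this role involves, here is a minimal Python sketch of one common step: flattening a nested JSON API record into a tabular row before loading it into a lake table. The payload shape, field names, and flattening rules are illustrative assumptions only, not taken from the project.

```python
import json

# Hypothetical payload shape and field names - assumptions for illustration,
# not part of the job description.
def flatten_order(record: dict) -> dict:
    """Flatten one nested JSON order record into a flat, table-ready row."""
    return {
        "order_id": record["id"],
        "customer": record["customer"]["name"],
        # Aggregate the nested line items into a single total column.
        "total": sum(item["price"] * item["qty"] for item in record["items"]),
    }

# Simulated REST response body (JSON text, as it would arrive over the wire).
payload = json.loads(
    '{"id": 1, "customer": {"name": "ACME"},'
    ' "items": [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}]}'
)

row = flatten_order(payload)
print(row)  # {'order_id': 1, 'customer': 'ACME', 'total': 25.0}
```

In the actual role this kind of transform would typically run inside a Spark job on Azure Databricks rather than in plain Python, but the shaping logic is the same.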