Remote work · Senior

Databricks Data Engineer/Architect

Spyrosoft

Wrocław

23 520 - 30 240 PLN (B2B)

Requirements

  • Data engineering
  • Big data
  • Databricks
  • Data pipelines
  • Unity Catalog
  • Python
  • SQL
  • ETL
  • Data Lake
  • Data warehouses
  • Public cloud
  • AWS
  • GCP
  • Storage
  • Kanban

Job description

About the project

Join our data engineering team as we develop and scale our enterprise data platform. We are building a high-performance ecosystem designed to manage large-scale datasets, from structured to unstructured formats. In this role, you will help modernize our data infrastructure by implementing cutting-edge storage and processing solutions, and you will play a key part in designing how we ingest, process, and govern data to provide reliable insights across the organization.

Tech stack:

  • Databricks (Unity Catalog, Delta Live Tables)
  • Python (PySpark), SQL
  • Azure, AWS, or GCP
  • Data Lakehouse, Data Mesh, Data Marts
  • DevOps, CI/CD pipelines
  • Agile (Scrum/Kanban)

About Spyrosoft

Spyrosoft is an authentic, cutting-edge software engineering company established in 2016. In 2021 and 2022, we were among the fastest-growing technology companies in Europe according to the Financial Times. We were founded by a group of tech experts with established backgrounds in software engineering, who created an 'engineer-to-engineer' workplace powered by enthusiasm, fairness and authentic relationships. With a unique offering that bridges the gap between technology and business, we specialise in technology solutions for the Industry 4.0, automotive, geospatial, healthcare & life sciences, employee experience & education, and financial services industries.

Requirements:

  • At least 8 years in Data Engineering, with a minimum of 2 years specifically in Big Data environments.
  • 4+ years of hands-on experience with Databricks services, including data pipelines and Unity Catalog.
  • Expert-level skills in Python and SQL.
  • Strong background in Data Warehousing, ETL, and distributed data processing.
  • Deep understanding of Data Lakes, Data Warehouses, and Data Mesh concepts.
  • Experience with at least one public cloud (Azure, AWS, or GCP) and strong design skills for both relational and non-relational storage.
  • An analytical mindset capable of troubleshooting complex issues in a big data landscape.
  • Very good verbal and written English (B2/C1).
  • Experience working in Agile (Scrum/Kanban) environments.

Daily tasks:

  • Design and maintain robust data pipelines and distributed data processing systems using Databricks.
  • Implement and manage data governance and security frameworks via Unity Catalog.
  • Develop sophisticated data models (relational and non-relational) to support complex analytical requirements.
  • Improve the performance and reliability of Big Data workflows and ETL processes.
  • Work within an Agile environment, integrating DevOps and CI/CD principles into the data lifecycle.
  • Act as a subject matter expert, guiding the team through complex big data challenges and architectural decisions.
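To give a flavour of the day-to-day work described above, here is a minimal sketch of one Databricks pipeline step governed by Unity Catalog. It is an illustration only, not Spyrosoft's actual codebase: the storage path, the `demo.sales` catalog and schema, the column names, and the `analysts` group are all hypothetical.

```python
# Minimal illustrative sketch (PySpark on a Databricks cluster).
# All names below (bucket path, demo.sales catalog/schema, the
# analysts group) are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Ingest raw JSON events from cloud object storage.
raw = spark.read.format("json").load("s3://example-bucket/orders/")

# Basic cleaning: deduplicate, drop invalid rows, stamp ingestion time.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("ingested_at", F.current_timestamp())
)

# Persist as a Delta table under Unity Catalog's three-level
# namespace (catalog.schema.table).
cleaned.write.format("delta").mode("append").saveAsTable(
    "demo.sales.orders_clean"
)

# Governance via Unity Catalog: grant read-only access to an
# analyst group.
spark.sql("GRANT SELECT ON TABLE demo.sales.orders_clean TO `analysts`")
```

On Databricks the same flow is often written declaratively with Delta Live Tables (`@dlt.table` definitions with data-quality expectations), which is the variant named in the tech stack above.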