
Senior BigData Engineer (Databricks)

SoftServe

Remote

17 000 - 24 000 PLN (B2B)

Requirements

  • Databricks
  • Azure
  • SQL
  • Python
  • Data engineering
  • Unity Catalog
  • Azure Data Factory
  • PySpark
  • ADLS
  • Azure DevOps
  • Microsoft Fabric

Job description

About the project

WE ARE

SoftServe is a global digital solutions company headquartered in Austin, Texas, founded in 1993. Our associates are currently working on 2,000+ projects with clients across North America, EMEA, APAC, and LATAM. We are about people who create bold things, make a difference, have fun, and love their work.

Big Data & Analytics Center of Excellence is the data consulting and data engineering branch at SoftServe. Starting as a small team back in 2013, it has grown into a community of hundreds of Data Engineers and Architects who design and deliver end-to-end Data & Analytics solutions, from strategy and technical design to proof of concept and large-scale implementation. We work with clients across the Healthcare, Finance, Manufacturing, Retail, and Energy industries. We hold top-level partnership statuses with major cloud providers.

TOGETHER WE WILL

  • Care for your and your family's wellness with a health insurance package
  • Offer wide career opportunities, challenging projects, modern technologies, and a clear career path through SoftServe's People Excellence program
  • Provide access to 11,300+ learning solutions via SoftServe University and Udemy Business
  • Support professional growth through certifications from leading providers such as Google and AWS
  • Recognize top contributors and creative minds through the annual SoftServe Awards
  • Enable recognition through the Customer Hero Program by creating exceptional customer experiences
  • Support scaling expertise through participation in the Mentoring Program
  • Contribute to the success of clients ranging from startups and ISVs to Enterprise and Fortune 500 companies

Requirements

IF YOU ARE

  • A Data Engineer with over 5 years of experience designing data models and scalable batch and streaming ETL pipelines on the Databricks Lakehouse
  • Advanced in PySpark, Python, and SQL, as well as performance tuning and cost optimization
  • Confident in Databricks clusters and SQL Warehouses
  • Proficient with Unity Catalog governance: fine-grained access, ABAC policies, lineage, and cross-workspace sharing
  • Experienced in delivering end-to-end solutions using Databricks Asset Bundles, deploying jobs, pipelines, dashboards, and model serving via CI/CD
  • Skilled in designing bronze-silver-gold Lakehouse architectures using Delta Lake and exposing data through Databricks SQL and BI tools like Power BI
  • Communicative in written and verbal formats, successful in collaborating with technical and business stakeholders in English
  • Accustomed to modern Databricks AI/BI capabilities: Dashboards, Metric Views, MLflow, Genie, agents (an advantage)
  • Familiar with Azure data services: ADF, Synapse, ADLS, Azure DevOps, Microsoft Fabric (a plus)

Daily tasks

  • Be part of a team of data-focused Engineers dedicated to continuous learning, improvement, and knowledge sharing every day
  • Work with a cutting-edge technology stack, including pioneering services from major cloud providers at the forefront of innovation
  • Engage with customers of diverse backgrounds, ranging from large global corporations to emerging start-ups preparing to launch their first product
  • Be involved in the entire project lifecycle, from initial design and proof of concept (PoC) to minimum viable product (MVP) development and full-scale implementation
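For candidates unfamiliar with the bronze-silver-gold (medallion) Lakehouse pattern named in the requirements, here is a minimal, hedged sketch of the idea. On Databricks the layers would be Delta Lake tables transformed with PySpark; plain Python dicts stand in for DataFrames here, and all field names and sample values are hypothetical.

```python
# Sketch of the bronze -> silver -> gold (medallion) layering:
# bronze keeps raw ingested data as-is, silver applies quality rules
# and typing, gold serves business-level aggregates. All names are
# illustrative, not part of any real pipeline.

from collections import defaultdict

# Bronze: raw events, retained verbatim (duplicates and bad rows included).
bronze = [
    {"order_id": "A1", "amount": "19.99", "country": "PL"},
    {"order_id": "A1", "amount": "19.99", "country": "PL"},  # duplicate
    {"order_id": "A2", "amount": None,    "country": "DE"},  # failed quality check
    {"order_id": "A3", "amount": "5.00",  "country": "PL"},
]

def to_silver(rows):
    """Silver: deduplicate on the business key and enforce types."""
    seen, silver = set(), []
    for row in rows:
        if row["order_id"] in seen or row["amount"] is None:
            continue  # drop duplicates and rows failing validation
        seen.add(row["order_id"])
        silver.append({**row, "amount": float(row["amount"])})
    return silver

def to_gold(rows):
    """Gold: a business-level aggregate (revenue per country)."""
    revenue = defaultdict(float)
    for row in rows:
        revenue[row["country"]] += row["amount"]
    return dict(revenue)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)
```

The same layering applies regardless of engine; in a Databricks job each function would become a PySpark transformation writing to its own Delta table.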