NoFluffJobs · On-site · Mid · New

Data Engineer (Python or Scala) - Hybrid (Warsaw)

Samba TV via CC

Location: Warsaw

14 000 - 21 000 PLN (B2B)

Requirements

  • Python
  • Scala
  • Big Data
  • Apache Spark
  • REST API (nice to have)
  • Cloud (nice to have)

Job description

About the project: Correct Context is looking for a Data Engineer (Python or Scala) for Samba TV in Warsaw, Poland.

Samba TV tracks streaming and broadcast video across the world with our proprietary data and technology. We are on a mission to fundamentally transform the viewing experience for everyone. Our data enables media companies to connect with audiences for new shows and movies, and enables advertisers to engage viewers and measure reach across all their devices. We have an amazing story with a unique perspective on culture formed by a global footprint of data and AI-driven insights.

We offer:

  • Real big data projects 🚀
  • An international team (US, PL, GB) 🌎
  • A small, independent team working environment 🧑‍💻
  • High influence on the working environment 🎯
  • A hands-on environment with real ownership 🛠️
  • Hybrid work model in Warsaw, Poland (3 days in the office / 2 days from home) 🏢
  • 14,000 - 21,000 PLN net/month B2B 💰
  • Free lunch in the office 🍽️

If you don't have all the qualifications but think there is a match, let's talk!
The recruitment process for the Data Engineer position has the following steps:

  • Technical survey (~10 min)
  • Technical screening (30 min)
  • Technical interviews with Samba TV team members (3 or 4 interviews of around 60 min each)

Requirements:

The candidate must have:

  • 2+ years of experience in data engineering or a related role
  • Strong hands-on experience with Databricks, Apache Spark, and BigQuery or Snowflake
  • Proven experience with modern table formats such as Delta Lake and Iceberg
  • Hands-on experience with Databricks Unity Catalog, data governance, and metadata management
  • Experience managing and extending Apache Airflow, plus solid AWS knowledge, especially S3 and data-related services
  • Proficiency in Python and SQL, along with data modeling and lakehouse architecture knowledge
  • Strong understanding of distributed systems, big data processing, and how teams operate with data

Nice to have:

  • Great Expectations or similar data quality framework experience
  • REST API development supporting secure data access and automation
  • Monitoring, logging, and alerting for data workflows
  • A FinOps mindset and cloud cost optimization experience

Daily tasks:

  • Build scalable data product architecture for both internal and external data consumers
  • Modernize data frameworks and integrations with Databricks and BigQuery
  • Upgrade Apache Airflow and reduce developer toil across the data platform
  • Develop and optimize large-scale data transformations using Apache Spark
  • Implement governance, validation, monitoring, and secure data access patterns
  • Collaborate with data teams, software engineers, and FinOps on platform cost and performance
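The nice-to-have list mentions Great Expectations or a similar data quality framework, and the daily tasks include implementing validation for data workflows. As a rough, hypothetical sketch of that idea in plain Python (this is not the actual Great Expectations API; all function and field names here are illustrative):

```python
# Illustrative "expectation"-style data quality checks in plain Python.
# This only sketches the concept behind frameworks like Great Expectations;
# the check names and the view_minutes field are made up for the example.

def expect_not_null(rows, column):
    """Return the rows where `column` is missing or None."""
    return [r for r in rows if r.get(column) is None]

def expect_between(rows, column, low, high):
    """Return the rows whose `column` value falls outside [low, high]."""
    return [r for r in rows
            if r.get(column) is not None and not (low <= r[column] <= high)]

def validate(rows):
    """Run a small suite of checks and report failing rows per expectation."""
    return {
        "view_minutes_not_null": expect_not_null(rows, "view_minutes"),
        "view_minutes_in_range": expect_between(rows, "view_minutes", 0, 24 * 60),
    }

if __name__ == "__main__":
    sample = [
        {"device_id": "a", "view_minutes": 120},
        {"device_id": "b", "view_minutes": None},   # fails the not-null check
        {"device_id": "c", "view_minutes": 5000},   # fails the range check
    ]
    for check, failures in validate(sample).items():
        print(check, "failed rows:", len(failures))
```

In a real pipeline these checks would typically run as a validation step inside an Airflow task, gating downstream Spark jobs on the result.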