Senior Data Platform Engineer
Samba TV via Correct Context
Warsaw
19 000 - 32 000 PLN (B2B)
Requirements
- Python
- Scala
- Big Data
- Teamwork
- CI/CD (nice to have)
- AWS (nice to have)
Job description
About the project:

Correct Context is looking for a Senior Data Platform Engineer (Python or Scala) for Samba TV in Warsaw, Poland.

Samba TV tracks streaming and broadcast video across the world with our proprietary data and technology. We are on a mission to fundamentally transform the viewing experience for everyone. Our data enables media companies to connect with audiences for new shows and movies, and enables advertisers to engage viewers and measure reach across all their devices. We have an amazing story with a unique perspective on culture formed by a global footprint of data and AI-driven insights.

We offer:
- Real big data projects 🚀
- An international team (US, PL, GB) 🌎
- A small, independent team environment 🧑‍💻
- High influence on your working environment 🎯
- A hands-on environment with real ownership 🛠️
- A hybrid work model in Warsaw, Poland (3 days in the office / 2 days from home) 🏢
- 19,000 - 32,000 PLN net/month B2B 💰
- Free lunch in the office 🍽️

As a Senior Data Platform Engineer, Samba Platform, you'll:
- Build scalable data product architecture for both internal and external data consumers
- Modernize data frameworks and integrations with Databricks and BigQuery
- Upgrade Apache Airflow and reduce developer toil across the platform (see the DAG sketch at the end of this posting)
- Develop and optimize large-scale Spark-based transformations (an illustrative sketch appears at the end of this posting)
- Implement governance, validation, monitoring, and secure data access patterns
- Drive innovation across the tech org and improve the cost and performance of cloud-based infrastructure

If you don't have all the qualifications but you think there is a match, let's talk!

The recruitment process for the Senior Data Platform Engineer, Samba Platform position has the following steps:
- Technical survey (~10 min)
- Technical screening (30 min)
- Technical interviews with Samba TV team members (3 or 4 interviews of around 60 min each)

Requirements:

The candidate must have:
- 5+ years of experience in data engineering or a related role
- Strong hands-on experience with Databricks, Apache Spark, and BigQuery or Snowflake
- Proven experience with modern table formats such as Delta Lake and Iceberg
- Hands-on experience with Databricks Unity Catalog, data governance, and metadata management
- Experience managing and extending Apache Airflow, plus solid AWS knowledge, especially S3 and data-related services
- Proficiency in Python and SQL, along with data modeling and lakehouse architecture knowledge
- Strong understanding of distributed systems, big data processing, and how teams operate with data

Nice to have:
- Infrastructure as Code experience, especially Terraform
- CI/CD pipelines for data platforms
- Experience working in Agile or Scrum environments
- AWS, Databricks, or cloud data certifications
- Data clean room ecosystem experience across Snowflake, Databricks, or BigQuery
- Streaming data pipelines such as Kafka or Flink

Daily tasks: as listed above under "As a Senior Data Platform Engineer, Samba Platform, you'll".
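For candidates who want a concrete flavor of the work, below is a minimal sketch of the kind of Spark transformation this role involves: rolling raw viewing events up into daily household-level metrics and writing them to a Delta Lake table. All table and column names are hypothetical placeholders, not Samba TV's actual schema, and the snippet assumes a Spark environment with Delta Lake support (e.g., Databricks).

```python
# Minimal PySpark sketch: aggregate raw viewing events into daily household
# metrics and persist them as a partitioned Delta table. All names below
# (raw.viewing_events, analytics.daily_household_viewing, column names)
# are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-viewing-rollup").getOrCreate()

# Read the raw event table (hypothetical source).
events = spark.read.table("raw.viewing_events")

# Aggregate watch time and distinct titles per household per day.
daily = (
    events
    .withColumn("view_date", F.to_date("event_ts"))
    .groupBy("household_id", "view_date")
    .agg(
        F.sum("watch_seconds").alias("total_watch_seconds"),
        F.countDistinct("content_id").alias("distinct_titles"),
    )
)

# Write as a Delta table partitioned by date; Delta Lake is one of the
# table formats named in the requirements.
(
    daily.write.format("delta")
    .mode("overwrite")
    .partitionBy("view_date")
    .saveAsTable("analytics.daily_household_viewing")
)
```

Since the role also covers managing and extending Apache Airflow, here is an equally hypothetical Airflow 2.x DAG that could schedule such a rollup daily. The DAG id, schedule, and submission logic are placeholders, not the team's actual orchestration code.

```python
# Hypothetical Airflow DAG (TaskFlow API; `schedule` requires Airflow 2.4+).
# In practice the task would submit the Spark job above, e.g., via a
# Databricks operator; here it only logs the logical date.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_viewing_rollup():
    @task
    def run_rollup(ds=None):
        # `ds` is the logical date string injected from Airflow's task context.
        print(f"Submitting daily viewing rollup for {ds}")

    run_rollup()


daily_viewing_rollup()
```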