NoFluffJobs · Remote work · Senior

Senior Data Engineer

DCG

Location: Poland

20 160 - 21 840 PLN (B2B)

Requirements

  • Data pipelines
  • Data warehouses
  • Data models
  • Python
  • PySpark
  • Scala
  • Java
  • GDPR
  • SQL
  • Bash
  • Shell
  • Snowflake
  • dbt
  • Kafka
  • Kinesis
  • Airflow
  • AWS
  • Glue
  • Data Lake
  • NoSQL
  • MongoDB
  • Terraform
  • Infrastructure as Code
  • Git
  • GitHub
  • GitHub Actions
  • Data modelling
  • Kanban (nice to have)
  • Communication skills (nice to have)
  • Cloud (nice to have)

Job description

About the project

Offer:

  • Private medical care
  • Co-financing for the sports card
  • Constant support of a dedicated consultant
  • Employee referral program

Requirements:

  • Strong experience as a Data Engineer (Senior / Full Stack)
  • Proficiency in Python, PySpark, SQL, and Bash/Shell scripting
  • Experience with Snowflake, dbt, Kafka or Kinesis, Airflow, and AWS Glue
  • Experience working with data platforms such as Data Lake, Data Warehouse, or Lakehouse
  • Knowledge of NoSQL databases (e.g., MongoDB)
  • Hands-on experience with AWS and building cloud-based data platforms
  • Experience with Terraform (Infrastructure as Code)
  • Proficiency with Git/GitHub and CI/CD tools (e.g., GitHub Actions)
  • Strong understanding of modern data architectures (Data Lake, Data Warehouse, Lakehouse, Data Mesh)
  • Experience with data modelling, data governance, and cost optimization of data pipelines
  • Experience working in Agile environments (Scrum, Kanban)
  • Strong communication skills and the ability to work with global stakeholders
  • English proficiency at B2+ level or higher

Nice to have:

  • Experience in cloud data platform transformations
  • Experience working with large-scale data environments

Daily tasks:

  • Design, develop, and maintain scalable data pipelines (ETL/ELT)
  • Build and enhance data warehouses and data lake / lakehouse solutions
  • Create and deploy data models supporting business and analytical needs
  • Write efficient and scalable code (Python / PySpark, optionally Scala or Java)
  • End-to-end ownership of pipelines feeding the Data Platform
  • Ensure high data quality, availability, and timeliness
  • Collaborate with the Data Governance team (GDPR, CISO, data quality built into pipelines)
  • Optimize existing solutions for performance, cost, and stability
  • Work closely with product, analytics, and business teams
  • Define technical and architectural standards
  • Prototype and implement new approaches and technologies
  • Collaborate with the Data Product Manager to align data sources and requirements
  • Create and deliver the data roadmap for key datasets
  • Clearly communicate technical solutions to both technical and non-technical stakeholders
  • Mentor and support less experienced data engineers
  • Represent the Data & Analytics team in cross-functional initiatives
  • Promote the value of modern data solutions across the organization
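The pipeline work described above follows the classic extract-transform-load shape, with data quality checks built into the pipeline itself. As a minimal sketch of that shape (plain Python standing in for the PySpark/Snowflake/Airflow stack named in the requirements; every name below is hypothetical):

```python
from datetime import date

# Hypothetical minimal ETL sketch. In the real stack, extract would read
# from Kafka/Kinesis, transform would run in PySpark, and load would write
# to Snowflake under Airflow orchestration; plain Python keeps the shape visible.

def extract(raw_events):
    """Extract: pass through only well-formed events (quality built into the pipeline)."""
    for event in raw_events:
        if "user_id" in event and "amount" in event:
            yield event

def transform(events):
    """Transform: normalize types and stamp each row with a load date."""
    for event in events:
        yield {
            "user_id": str(event["user_id"]),
            "amount": round(float(event["amount"]), 2),
            "loaded_at": date.today().isoformat(),
        }

def load(rows, warehouse):
    """Load: append rows to an in-memory stand-in for a warehouse table."""
    warehouse.setdefault("fact_payments", []).extend(rows)
    return len(warehouse["fact_payments"])

raw = [
    {"user_id": 1, "amount": "19.99"},
    {"amount": "5.00"},               # malformed: dropped at extract
    {"user_id": 2, "amount": 7},
]
warehouse = {}
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 2 rows survive the quality filter
```

The generator-based stages compose lazily, mirroring how streaming pipelines avoid materializing intermediate datasets; the malformed record is rejected at extraction rather than failing downstream.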