NoFluffJobs On-site Mid New

Big Data Engineer (Java+Spark)

Devire

⚲ Warsaw

18 480 - 30 240 PLN (B2B)

Requirements

  • Data engineering
  • Spark
  • Java
  • Scala
  • Python
  • DevOps
  • Big data
  • CI/CD tools
  • Microservices
  • Airflow
  • Git
  • Communication skills

Job description

About the project:

Devire IT Outsourcing is a form of collaboration dedicated to IT professionals, based on self-employment (B2B) and delivering projects for leading clients pursuing innovative, cutting-edge work. For our client in the banking industry, we are looking for an experienced Big Data Engineer with experience in Java and Spark.

  • B2B contract (via Devire)
  • Location: Warsaw, hybrid work (3 days per week in the office)
  • Rate: 110-180 PLN/h
  • Benefits package (medical healthcare, sports membership, etc.)
  • Opportunity to cooperate with an international company with a stable market position
  • Long-term cooperation
  • An active role in an international structure and the opportunity for growth

Requirements:

  • 1-5 years of professional experience in Big Data engineering, preferably with a background in banking
  • Expertise in Spark and Java/Scala
  • Additional knowledge of Python, Java, DevOps, Big Data, CI/CD tools, microservices, APIs, Airflow, and Git
  • Strong analytical and problem-solving skills
  • A proactive attitude and strong teamwork ability
  • Effective communication skills
  • The ability to work in a global team across different time zones
  • Strong planning and prioritization skills
  • Fluency in English (Polish and French are considered an advantage)
Daily tasks:

  • Designing and developing solutions for collecting, transforming, and processing large datasets
  • Analyzing data from multiple sources and translating findings into technical specifications
  • Industrializing data processing for reliability, robustness, efficiency, and resilience
  • Developing and implementing data collection, processing jobs, and data mapping using Spark/Scala
  • Conducting code reviews and collaborating with team members for continuous improvement
  • Contributing to agile feature teams and helping advance agile best practices
  • Delivering IT solutions in Scrum cycles (2-3 weeks) with tangible business value
  • Recommending and contributing to big data architecture strategies
  • Staying up to date with emerging data technologies and implementing best practices
  • Supporting applications and troubleshooting production issues as needed
  • Ensuring quality and maintainability of code through best development practices (BDD, TDD, clean code, DevOps)
  • Producing technical documentation, test plans, and project deliverables
  • Identifying potential project risks related to costs and deadlines, and proposing mitigation strategies when necessary