Expert Data Engineer (IT Developer)
B2Bnetwork
⚲ Gdańsk, Warszawa, Łódź
Requirements
- Scala
- Apache Spark
- Python
- Linux
- SQL
- AWS
Job description
Your responsibilities
- Design and build high-performance, distributed Big Data pipelines
- Process, transform, and enrich large-scale datasets using Apache Spark
- Develop solutions supporting data analytics, reporting, and business decision-making
- Collaborate with data analysts and business stakeholders to build data models
- Maintain and develop solutions based on Hadoop and Big Data technologies
- Automate data processing workflows
- Work closely with QA and other technical teams

Requirements
- Strong knowledge of Scala and Apache Spark
- Experience working with Python
- Hands-on experience with the Hadoop ecosystem: YARN, EMR, Sqoop, Hive, Oozie
- Strong knowledge of SQL
- Practical experience with Linux systems
- Experience working with the AWS cloud
- Good command of English (working in an international environment)

Nice to have
- Experience migrating data from on-premise to cloud environments
- Experience working in Agile / SAFe frameworks
- Experience handling large-scale data and near real-time processing systems

About the project
You will join a team developing a central data platform for a large enterprise. The project includes:
- Building and developing distributed Big Data processing systems
- Migrating solutions to the public cloud (AWS)
- Creating components supporting advanced analytics and data visualization
- Working in a SAFe environment with regular PI Planning sessions

What we offer
- Participation in large-scale projects with real business impact
- Work with a modern technology stack (Scala, Spark, Python, AWS, Hadoop)
- Opportunity to take part in the cloud transformation of a large organization
- Flexible hybrid work model (approx. 1 day per week in the office)
- Occasional team meetings in Tricity (PI Planning)

Additional information
- Work model: hybrid (1 day per week in the office)
- Office locations: Tricity / Warsaw / Łódź