Hadoop Big Data Developer
Red Global
Kraków
Requirements
- AWS
- ETL
- Hadoop
- Azure
Job description
We’re currently looking for a Hadoop Big Data Developer for a long-term data engineering project. Key highlights:
- Poland-based, with remote/hybrid flexibility
- 12+ months, long-term cooperation
- Strong focus on the Hadoop ecosystem, Spark, Hive, and distributed data processing
- Hands-on development of large-scale data pipelines and ETL workflows
- Experience with cloud platforms (Azure/AWS) and big data tools preferred
- Strong programming skills in Java/Scala/Python

We’re targeting Big Data Developers, not just analysts: engineers who design, build, and optimize scalable data processing systems in real production environments.

The goal of the project is to design and implement robust, high-performance data pipelines that can process massive volumes of structured and unstructured data, enabling advanced analytics, faster decision-making, and seamless integration with enterprise data platforms.

If this sounds relevant, feel free to reply or reach out for details.

Best regards,
Himanshi