Python Developer (Data Engineering, AI)
emagine Polska
⚲ Warszawa
Requirements
- Python
- Kubernetes
- Spark
- Data Engineering
- Cloudera
Job description
🌍 Remote work: fully remote
📑 Assignment type: B2B
📕 Project language: English
⏳ Project length: > 12 months + prolongations
⏰ Start: ASAP / 1 month
💻 Workload: full time
⚙️ Recruitment process: 2 interviews with the client
💼 Industry: IT Services / Digital Consulting
🔍 Additional information: After receiving the offer, a background check is carried out (references, criminal record check, etc.)

Summary:
The core purpose of this role is to design and build AI- and data-driven solutions using Python, while contributing to the development of scalable data platforms and pipelines in hybrid/on-prem environments for Nordic clients.

Main Responsibilities:
• Develop high-quality Python code for AI- and data-driven applications.
• Design, build, and maintain scalable, Kubernetes-based data pipelines.
• Work with distributed data processing frameworks (e.g., Spark / PySpark).
• Contribute to the design and implementation of data architectures in hybrid/on-prem environments.
• Support the development of AI-driven data products.
• Collaborate closely with a small, senior engineering team.
• Deliver end-to-end solutions across multiple environments.

Key Requirements:
• Strong hands-on experience in Python (Senior or strong Mid level).
• Experience with AI- or data-related tools and libraries.
• Interest in developing further in AI / Data Engineering.
• Familiarity with distributed data processing (e.g., Spark, or willingness to grow into PySpark).
• Experience with, or exposure to, Kubernetes-based environments.
• Ability to work across the full development lifecycle.

Nice to Have:
• Experience with on-prem or hybrid data platforms.
• Familiarity with tools such as Cloudera or MinIO.
• Experience with Airbyte or dbt.
• Knowledge of ETL/ELT processes and modern data stack tools.

Other Details:
• Team Structure: Small, remote-first team with a flexible culture.
• Work Environment: High level of autonomy and ownership.
• Growth Opportunity: Strong exposure to Data Engineering and AI projects, with support in developing PySpark and data platform expertise.