JustJoin.IT · Hybrid · Senior

Full-Stack Machine Learning Engineer

emagine Polska

Location: Stockholm Metropolitan Area

Requirements

  • Training
  • Maintenance
  • Artificial Intelligence (AI)
  • DataStage (ETL)
  • SQL
  • Python
  • Data modeling
  • Cloud
  • ETL
  • CI/CD

Job description

We are seeking a Full‑Stack Machine Learning Engineer with a strong foundation in data engineering to build and operate the data and ML infrastructure that powers our client’s AI‑driven products. This role blends advanced Data Engineering and Machine Learning Engineering responsibilities, with a heavy focus on delivering reliable, scalable, and high‑quality data pipeline foundations on Google Cloud Platform (GCP).

Start: ASAP
Length: 6 months with possible extension
Location: Must be in the Stockholm office 4 days per week

You will design and develop robust data pipelines, implement transformation layers using dbt, build optimized data models for analytics and ML workloads, and productionize models with Vertex AI using modern MLOps practices. You will collaborate closely with data scientists and machine learning engineers to create end‑to‑end systems that are production‑ready, maintainable, and cost‑efficient.

Key Responsibilities

  • Build and maintain scalable data pipelines on GCP using dbt, BigQuery, Dataflow, Pub/Sub, and Cloud Composer.
  • Perform data wrangling, cleaning, and preprocessing to prepare high‑quality datasets for analytics and ML.
  • Develop reliable data models and transformation workflows using dbt.
  • Ensure data quality, monitoring, and performance across datasets and pipelines.
  • Build and deploy ML workflows using Vertex AI, from training to production.
  • Collaborate with cross‑functional teams to design scalable data and ML architectures.
  • Collaborate with Data Scientists, ML Engineers, and Product Managers to enhance the performance and reliability of the underlying Data and AI solutions.
  • Implement MLOps best practices for seamless deployment, monitoring, and maintenance of ML models.

Preferred Skills & Experience

  • Strong experience with GCP tools (BigQuery, Dataflow, Pub/Sub, GCS, Cloud Composer).
  • Hands‑on experience with dbt and strong SQL skills.
  • Knowledge of ELT/ETL patterns, data modeling, and data quality practices.
  • Experience with Vertex AI (training, pipelines, deployment).
  • Proficiency in Python and building production ML pipelines.
  • Proficiency in MLOps fundamentals and basic CI/CD.
  • Experience with Docker and cloud‑native development.
  • Good understanding of distributed systems and scalable architecture.