JustJoin.IT · Remote work · Mid

MLOps Engineer

Devapo

Warszawa

Requirements

  • ML
  • LLM
  • CI/CD
  • Python

Job description

We are looking for an MLOps Engineer who knows that a model is only as good as the pipeline behind it: someone who has actually kept ML systems running in production, not just deployed a tutorial from a notebook. You will work on international projects for clients in banking, insurance, and telco (US, Netherlands, UK), building the infrastructure that makes AI reliable at scale.

About Devapo

At Devapo, we focus on continuous self-development and acquiring new knowledge. If you are a fast learner, want to participate in international projects, are a team player, and can work independently, join us! We provide our clients with more than just code: we want to equip them with tools that allow their businesses to flourish. Our clients' success is our success, which is why we make sure that everyone who creates Devapo has a long-term goal in mind. At Devapo, you will have the opportunity to discuss your challenges and solutions with our team of experts, including experienced architects who are always ready to share their knowledge and guide you through complex technical decisions.
Key Responsibilities

  • Designing, building, and maintaining CI/CD pipelines for ML model training, evaluation, and deployment
  • Managing the model lifecycle end-to-end, from experiment tracking and versioning to production serving and monitoring
  • Setting up and maintaining infrastructure for ML workloads on cloud platforms (AWS, Azure, or GCP)
  • Implementing monitoring, alerting, and observability for deployed models: detecting drift, latency issues, and quality degradation
  • Building and managing feature stores, data pipelines, and ETL processes that feed ML models
  • Containerizing and orchestrating ML services using Docker and Kubernetes
  • Collaborating with data scientists and ML engineers to streamline the path from experimentation to production
  • Implementing Infrastructure as Code (Terraform, Pulumi, or CloudFormation) for reproducible ML environments
  • Defining and enforcing MLOps best practices, standards, and documentation across teams

Requirements

  • Proven experience running ML/AI systems in production: you have dealt with model drift, pipeline failures, and scaling issues in real environments
  • Strong Python skills and hands-on experience with MLOps tooling: MLflow, Kubeflow, Airflow, or similar
  • Solid experience with containerization (Docker) and orchestration (Kubernetes) in production settings
  • Working knowledge of at least one major cloud platform (AWS SageMaker, Azure ML, or GCP Vertex AI) and its ML services
  • Experience with CI/CD tools (GitHub Actions, GitLab CI, Jenkins, or Azure DevOps) applied to ML workflows
  • Infrastructure as Code experience (Terraform, Pulumi, or CloudFormation)
  • Understanding of ML fundamentals: you do not need to build models, but you do need to understand what makes them break in production
  • Experience with monitoring and observability tools (Prometheus, Grafana, Datadog, or similar)
  • English B2+: client-facing role, calls and written communication included

Nice to Have

  • Experience with LLM serving infrastructure (vLLM, TGI, Triton Inference Server)
  • Databricks, Azure AI Foundry, or AWS Bedrock
  • GPU infrastructure management and cost optimization
  • Kafka or streaming pipelines for real-time inference
  • Experience with vector databases (Qdrant, Weaviate, Pinecone, pgvector) in production RAG setups
  • Familiarity with AI governance and the regulatory context (EU AI Act, GDPR)

What We Offer

  • Funded certifications and training
  • Private medical care (Medicover)
  • Multisport card
  • English language classes
  • Flexible working hours
  • Team meetups and integration events
  • Referral bonus
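To make the drift-detection responsibility above concrete: a common heuristic is the population stability index (PSI), which compares the distribution of a feature (or prediction) in production against a reference sample. The sketch below is a minimal stdlib-only illustration of the idea, not any specific Devapo tooling; the function name, bin count, and the conventional 0.1/0.25 thresholds are assumptions for the example.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample ("expected") and a live sample.

    Conventional reading: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant drift. Bin edges come from the reference sample.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant sample
    edges = [lo + i * width for i in range(1, bins)]

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index via edges
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / len(sample), 1e-6) for c in counts]

    exp_pct, act_pct = bin_fractions(expected), bin_fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(exp_pct, act_pct))

# Identical distributions score near zero; a shifted one scores high.
reference = [i / 100 for i in range(1000)]
shifted = [x + 5 for x in reference]
assert population_stability_index(reference, reference) < 0.01
assert population_stability_index(reference, shifted) > 0.25
```

In a real pipeline this check would run on a schedule (e.g. an Airflow task) against logged inference inputs, with the PSI value exported to Prometheus or Grafana and an alert fired above the drift threshold.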