NoFluffJobs · Remote work · Mid · New

DevOps Engineer

apreel Sp. z o.o.

Warsaw, Wrocław

24 360 - 27 720 PLN (B2B)

Requirements

  • DevOps
  • AI
  • CI/CD Pipelines
  • Machine learning
  • Cloud
  • Security
  • High availability
  • Jenkins
  • GitHub Actions
  • Kubernetes
  • Docker
  • Terraform
  • Cloud platform
  • Azure
  • Google Cloud
  • MLflow
  • Kubeflow
  • Airflow
  • Python
  • Bash
  • YAML
  • Automated testing
  • Grafana
  • Degree
  • Communication skills
  • AWS
  • TensorFlow
  • PyTorch
  • scikit-learn

Job description

About the project

Offer:

  • Location: 100% remote; equipment pick-up in Warsaw or Wrocław
  • Start: ASAP
  • Employment: B2B contract with apreel
  • Rate: up to 165 PLN/h + VAT

Requirements

Skills and attributes for success:

  • Expertise in DevOps tools such as Jenkins, GitHub Actions, Kubernetes, Docker, and Terraform.
  • Strong understanding of cloud platforms (AWS, Azure, Google Cloud) and their AI/ML services.
  • Familiarity with pipeline optimization tools such as MLflow, Kubeflow, or Airflow tailored for AI workflows.
  • Proficiency in scripting and programming languages (Python, Bash, YAML, etc.).
  • Ability to implement automated testing frameworks for validating AI models and workflows.
  • Experience in creating secure environments specific to AI-driven systems.
  • Proven capability to integrate AI-powered monitoring tools such as AIOps platforms, Grafana with ML plugins, or custom AI-based diagnostics.

To qualify for the role, you must have:

  • A bachelor's degree in Engineering, IT, Science, or another technical qualification.
  • Demonstrable experience with DevOps tools and frameworks commonly used in AI/ML workflows.
  • 5+ years in a corporate IT environment in a related position.
  • Strong communication skills and the ability to collaborate with global teams.

Ideally, you'll also have:

  • Certifications in DevOps or AI/ML (e.g., Kubernetes, AWS Machine Learning Specialty).
  • Experience working with AIOps platforms to enhance operational efficiency.
  • Knowledge of specialized data pipeline automation tools designed for machine learning contexts.
  • Hands-on experience with AI/ML frameworks such as TensorFlow, PyTorch, or scikit-learn, especially in production environments.

Daily tasks:

  • Design and maintain CI/CD pipelines incorporating AI/ML tools and frameworks.
  • Collaborate with AI teams to deploy and scale machine learning models into production environments.
  • Integrate AI-based monitoring, alerting, and analytics tools to enhance pipeline visibility and enable predictive analysis.
  • Optimize infrastructure resource utilization for AI applications across cloud, on-premises, or hybrid environments.
  • Ensure robust security practices within AI-driven pipeline ecosystems.
  • Troubleshoot pipeline and infrastructure issues, particularly those involving AI components, and guarantee high availability.
  • Monitor system performance and implement improvements using AI-enhanced tools.