Azure DevOps Engineer (Hybrid – Krakow, 3 days/week)
DCV Technologies
⚲ Kraków
Requirements
- GitLab
- Terraform
- Helm
- Azure DevOps
- Data Engineering
- Kubernetes/AKS
Job Description
Location: Kraków, Hybrid (3 days/week)

Role Summary
We are looking for a senior Azure DevOps Engineer to design, automate, and operate secure, scalable CI/CD pipelines and cloud infrastructure in Azure. The ideal candidate has deep hands-on experience with Terraform, AKS/Kubernetes, Helm, Grafana, and either Jenkins or Azure DevOps/GitLab for pipeline orchestration. Exposure to data engineering stacks (Spark, Databricks, Azure Data Lake) is a plus for enabling analytics and ML workloads in production.

Experience
- 8–10 years total IT experience
- 4–5 years specifically in Azure DevOps practices
- 5+ years hands-on with: GitLab, Terraform, Azure, Azure DevOps (ADO), Azure Data Lake, Azure Kubernetes Service (AKS), Spark, Databricks

Key Responsibilities
- Design, build, and maintain CI/CD pipelines using Azure DevOps (ADO), Jenkins, and GitLab CI, including multi-stage deployments, environment promotions, approvals, and rollback strategies
- Implement Infrastructure as Code (IaC) with Terraform (modules, remote state, workspaces) for Azure resources: AKS, VNets, Key Vault, ACR, Azure Data Lake Storage (ADLS), and supporting services
- Operate and secure Kubernetes (AKS) clusters: networking, ingress, scaling, node pools, policy, and security baselines
- Package and release workloads using Helm; manage chart versioning, values, and environment overlays
- Set up end-to-end observability: Grafana dashboards, Prometheus/Alertmanager integrations, Azure Monitor/Log Analytics, and actionable alerting tied to SLOs
- Integrate and automate deployments for Spark/Databricks jobs and data pipelines, including secrets, configuration, and scheduling
- Embed DevSecOps controls across pipelines: image and IaC scanning, SBOM generation, supply-chain security, and secrets management via Azure Key Vault
- Troubleshoot production issues, conduct RCAs, and improve reliability, performance, and cost efficiency
- Create reusable templates/libraries and documentation; mentor engineers and champion best practices

Must-Have Skills
CI/CD:
- Azure DevOps (ADO), 5+ years: YAML pipelines, environments, artifacts, approvals, service connections
- GitLab CI, 5+ years: runners, pipeline-as-code, templates, artifacts
- Jenkins (experience with both Jenkins and ADO/GitLab is a plus)
Terraform, 5+ years:
- Modules, remote backends, state management, workspaces, policy-as-code (OPA/Conftest), PR validations; strong Azure provider experience
Azure, 5+ years:
- Core services (Compute, Networking, Storage), Azure Data Lake (ADLS), Key Vault, ACR, RBAC/IAM
Kubernetes / AKS, 5+ years:
- Cluster operations, CNI networking, ingress controllers (NGINX/AGIC), HPA/VPA, Pod Security, PDBs, node pools, taints/tolerations
Helm:
- Authoring and managing charts, values, and release strategies (blue/green, canary)
Observability:
- Grafana dashboards, Prometheus metrics/alerts; Azure Monitor/Log Analytics, KQL basics
Containers & Automation:
- Dockerfile best practices (multi-stage builds, non-root users), scripting (Bash/PowerShell; Python/Groovy a plus)
Data Engineering Integration:
- Spark and Databricks CI/CD integration, job orchestration, configuration, and basic performance awareness

Good-to-Have
- GitOps (Argo CD/Flux), Helmfile, Kustomize
- Security: container/IaC scanning (Trivy/Anchore/Checkov), image signing, supply-chain security (SLSA)
- Networking/Security: Azure Firewall, Private Link, VNets/peering, NSGs, WAF
- IaC at scale: Terraform Cloud/Enterprise, Terragrunt, Atlantis
- Cost optimization for AKS and data workloads (spot nodes, autoscaling, right-sizing)
- Service mesh (Istio/Linkerd), API gateways, Zero Trust patterns

Qualifications
- Bachelor's/Master's in Computer Science, Engineering, or equivalent practical experience
- Proven success running production-grade systems on Azure + AKS with automated CI/CD and Terraform
- Experience partnering with data engineering teams on Databricks/Spark/ADLS deployments
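To illustrate the "multi-stage deployments, environment promotions, approvals" responsibility, a minimal Azure DevOps YAML pipeline might look like the following sketch. The application name, ACR service connection, chart path, and environment names are all hypothetical; approvals and checks would be configured on the ADO environments themselves.

```yaml
# Illustrative multi-stage ADO pipeline: build an image, promote through
# staging to production via gated environments. Names are hypothetical.
trigger:
  branches:
    include: [main]

stages:
- stage: Build
  jobs:
  - job: BuildImage
    pool:
      vmImage: ubuntu-latest
    steps:
    - task: Docker@2
      inputs:
        command: buildAndPush
        repository: myapp            # hypothetical image repository
        containerRegistry: acr-conn  # hypothetical ACR service connection
        tags: $(Build.BuildId)

- stage: DeployStaging
  dependsOn: Build
  jobs:
  - deployment: Staging
    environment: staging             # checks/approvals attached to this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: helm upgrade --install myapp ./charts/myapp --set image.tag=$(Build.BuildId)

- stage: DeployProd
  dependsOn: DeployStaging
  jobs:
  - deployment: Prod
    environment: production          # promotion gated by environment approvals
    strategy:
      runOnce:
        deploy:
          steps:
          - script: helm upgrade --install myapp ./charts/myapp --set image.tag=$(Build.BuildId)
```

Rollback in this pattern is typically `helm rollback` or re-running the deploy stage with a previously known-good image tag.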
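The Terraform expectations (modules, remote backends, workspaces, the Azure provider) correspond to a layout along these lines. This is a hedged sketch only: the resource group, storage account, state key, and module path are invented for illustration.

```hcl
# Remote state stored in an Azure Storage backend (all names hypothetical).
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate001"
    container_name       = "tfstate"
    key                  = "platform.terraform.tfstate"
  }
}

provider "azurerm" {
  features {}
}

# Reusable module call for an AKS cluster; workspaces separate environments.
module "aks" {
  source              = "./modules/aks"   # hypothetical local module
  cluster_name        = "aks-platform-${terraform.workspace}"
  resource_group_name = "rg-platform-${terraform.workspace}"
  node_count          = 3
}
```

In a PR-validation workflow, `terraform plan` output would be posted for review and policy-as-code checks (e.g. OPA/Conftest) run before apply.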
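The "Dockerfile best practices (multi-stage, non-root)" item under Containers & Automation refers to patterns like the sketch below: build in a full toolchain image, then copy the artifact into a minimal runtime image that runs as a non-root user. Base images and paths are illustrative.

```dockerfile
# Stage 1: compile in a full SDK image (build tools never ship to production).
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# Stage 2: minimal runtime image, running as a non-root user.
FROM gcr.io/distroless/static:nonroot
COPY --from=build /out/app /app
USER nonroot
ENTRYPOINT ["/app"]
```

The result is a smaller attack surface and an image that passes common non-root admission policies on AKS.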