Job offers
T Hub - Senior PHP Developer
T-Mobile
⚲ Warszawa, Mokotów
Requirements
- PHPStorm
- JavaScript
- Python
- Selenium
- GitLab
Our requirements:
- Several years (4+) of experience in the IT industry
- Advanced experience with programming and application design
- Good skills in integrating systems and applications
- Good understanding of cloud systems, DevOps concepts and tooling
- Willingness to learn and teach new technologies
- Being self-motivated and organized
- Solid communication skills (oral and written), and willingness to develop them further
- PhpStorm and object-based programming
- JavaScript, C++, Python
- Selenium
- GitLab
- HTML
- MySQL/MariaDB
- Docker
- English – upper-intermediate, spoken and written (B2+)
- German language knowledge would be treated as an advantage

About the project: In this project, we pioneer the development of innovative applications serving Deutsche Telekom's end customers, internal technical staff, and service teams in Germany. Our mission is to deliver groundbreaking solutions that set new standards of excellence in customer experience and operational efficiency. We believe in innovation, integrity, and inclusivity. If you are passionate about solving complex challenges and driving impactful automation, this project is where you can grow and thrive. As a Senior Software Developer, you will lead the design and implementation of scalable software solutions that automate and streamline business processes across multiple platforms, ensuring high quality, maintainability, and cross-team collaboration. Working closely with product owners and stakeholders, you will analyze requirements, drive technical decisions, and mentor colleagues to foster continuous improvement and innovation.

Responsibilities:
- Lead the development of robust front-end applications and backend automation processes supporting Deutsche Telekom Technik.
- Design and build front-end assistants (robots), system overlays, and interfaces (e.g., SOAP, REST) tailored to customer needs.
- Translate requirements from product managers and steering teams into maintainable, scalable software solutions.
- Collaborate with internal stakeholders to ensure alignment and gather feedback.
- Apply agreed programming languages and tools to develop cross-platform automation and integration solutions.
- Approach complex technical challenges with creative problem-solving and innovative technologies.
- Actively contribute to research and development initiatives to improve processes and tools.

We offer:
- A dynamic environment where you'll lead your contributions across diverse projects.
- The opportunity to become an expert in some of the most exciting cutting-edge technologies, such as Conversational AI platforms and VoIP solutions.
- A collaborative team setup that supports your growth in a customer-facing technical consulting role.
- Room for individual technological exploration while shaping innovative enterprise solutions.
Software Engineer (Generative AI) | f/m/d
ERGO Technology & Services S.A.
⚲ Warsaw, Gdansk
Requirements
- Python
- Azure OpenAI
- Azure Machine Learning
- Cognitive Services
- Kubernetes
As a Software Engineer specializing in Generative AI projects, you will be involved in developing and deploying cutting-edge AI solutions within the Azure ecosystem. Collaborating closely with Data Scientists, AI Engineers, and DevOps, you will integrate AI-driven capabilities into our business applications, ensuring they are scalable, secure, high-performing, and of high quality. Our mission is to harness Artificial Intelligence to revolutionize enterprise processes, embedding AI deeply into our business framework. Your expertise in Azure services and AI/ML technologies will be crucial in realizing this vision.

How you will get the job done:
- creating, developing, and deploying AI/ML-based applications using Azure services
- developing and optimizing Large Language Model (LLM) based solutions tailored to real-world business cases
- building and maintaining APIs and microservices to efficiently serve AI/ML-based solutions to customers
- streamlining model operationalization and developing the model inference backend
- ensuring all AI solutions adhere to security, development, and governance standards
- writing and maintaining unit tests, ensuring robust test coverage, and enforcing best coding practices (clean code, maintainability, and performance optimization)
- creating clear, concise technical documentation for developed components
- working in Agile/Scrum teams to deliver high-quality applications on schedule
- working closely with Data Scientists and AI Engineers to integrate AI/ML into business solutions
Middle Python Developer (MLOps)
B2B.NET S.A.
⚲ Warszawa
16,400–18,400 PLN net (+ VAT) / month
Requirements
- Python
- Debian
- RHEL
- Kubernetes
- GCP
- Jenkins
- GitLab CI
Our requirements:
- Python (building and deploying applications)
- Linux (Debian / RHEL – fluent)
- Kubernetes (building applications hosted on K8s)
- GCP (deploying cloud solutions)
- independence in problem-solving
- familiarity with DevOps or MLOps methodology
- willingness to grow in the ML/LLM area

Nice to have:
- MLOps / LLMOps / BigData
- CI/CD (Jenkins, GitLab CI)
- data warehouses and ETL processes

About the project: The candidate joins an MLOps team. The project involves building and maintaining applications that use Machine Learning and LLM models, running in the GCP cloud on Kubernetes. Planned start: 04.05.2026; cooperation longer than 12 months.

Responsibilities:
- designing and implementing applications that use Machine Learning and LLM models
- configuring applications running on Linux and K8s in the GCP cloud
- troubleshooting issues with deployed solutions
- taking an active part in the MLOps team's project work
Manager - IT Delivery Lead Cloud Operations
Arla
⚲ Gdańsk
Requirements
- Microsoft Azure
- Azure DevOps
- Microsoft 365
Are you a passionate leader ready to make an impact in a global enterprise environment? Join Arla's IT organization in Gdańsk and take the lead in driving innovation, reliability, and excellence across our cloud-based data center operations.

About the role: We're looking for a People Manager and Delivery Lead to head our Cloud Operations team in Gdańsk, Poland. In this role, you will be responsible for setting the strategic direction for managing our hybrid cloud data center infrastructure, with a strong focus on Microsoft Azure. You'll manage a talented team of CloudOps Engineers and collaborate closely with IT Delivery Leads, Architects, Specialists, vendors, and your Head of Product to deliver secure, reliable, and cost-efficient services that create tangible business value for Arla.

How you will make an impact: You will lead and inspire your team to deliver high-quality cloud operations that ensure stability, security, and performance across Arla's infrastructure. With a focus on continuous improvement and collaboration, you'll help shape our cloud strategy and operational excellence. You will:
• Lead, coach, and develop a high-performing CloudOps team, fostering accountability, growth, and teamwork.
• Own the delivery and performance of Cloud Operations, ensuring availability, security, and cost efficiency across our cloud-based environment.
• Drive improvement initiatives and projects using agile methodologies (Scrum, DevOps, Kanban) and ensure strong vendor performance through effective governance.
• Oversee key operational processes such as OS patching, vulnerability management, disaster recovery, incident handling, capacity planning, and compliance.
• Build strong partnerships across IT and business units, driving alignment, transparency, and proactive risk management.

What will make you successful: You are a skilled leader who combines strategic insight with technical understanding. You know how to empower teams, manage complex environments, and ensure cloud services deliver real business value. You are familiar with working in an enterprise environment and managing its dependencies. You have a drive for getting things done and continually improve your area of responsibility. You have:
• Proven experience leading teams in IT Infrastructure or Cloud Operations, including recruitment, coaching, and performance management.
• Strong expertise in Microsoft Azure and core operational areas such as OS patching, DR, networking, load balancing, and security.
• Solid stakeholder and vendor management experience, including contract governance, SLA management, and service improvement.
• The ability to translate business needs into technical outcomes within an agile, fast-paced environment.
• Excellent communication skills in English, a Bachelor's or Master's degree in a relevant field, and familiarity with ITIL (certifications are a plus).

What do we offer? We value our IT managers and offer:
• Structured onboarding and training with industry experts.
• Financial support for ongoing education and certifications.
• Flexible working hours and hybrid work options.
• Employee Assistance Program (legal, psychological, health, and financial).
• Home office kit (monitor, laptop, keyboard, mouse, headset, webcam).
• Referral program with rewards for bringing in great talent.

Shape the Future of Dairy: Arla is a global leader in the dairy industry, committed to enabling good food choices that make life better, providing people with natural, sustainable nourishment, while taking care at every step to ensure Arla is a choice they can feel good about. If you are looking to shape the future with an ambitious global cooperative that truly cares about your growth, where everyone feels valued and empowered, and collaboration is at the core of the culture, Arla is a choice you can feel good about.
MLOps Engineer
IN4GE sp. z o.o.
⚲ Kraków, Warszawa, Wrocław
145–185 PLN net (+ VAT) / hour
Requirements
- Python
- Docker
- Kubernetes
- AWS
- Google Cloud Platform
- Microsoft Azure
- MLflow
- Kubeflow
- Airflow
- Terraform
- GitHub Actions
- GitLab CI/CD
- Jenkins
Our requirements:
- A minimum of 2–4 years of commercial experience in MLOps, DevOps, ML Engineering, or a related area.
- Knowledge of Python.
- Experience with Docker, Kubernetes, and CI/CD tools.
- Familiarity with a public cloud such as AWS, GCP, or Azure.
- Understanding of the ML model lifecycle and model deployment concerns.
- Experience with monitoring, automation, and production environments.
- English at a minimum B2 level.

Nice to have:
- Experience with MLflow, Kubeflow, Airflow, or related tools.
- Familiarity with Terraform, GitHub Actions, GitLab CI/CD, or Jenkins.
- Experience with LLMOps and deploying Generative AI solutions.
- Familiarity with data and model security topics.

About the project: For our client, we are looking for an MLOps Engineer to collaborate on projects delivered in a modern technology environment. This is an offer for someone who wants to support the production use of Machine Learning and AI models, combining competencies in data, automation, infrastructure, and deployments. The scope of cooperation covers building and developing environments that support the model lifecycle, automating pipelines, and monitoring the quality of deployed solutions. We are looking for someone who is comfortable at the intersection of ML, DevOps, and cloud, and who wants real influence on the stability, scalability, and efficiency of AI environments.

Responsibilities:
- Building and developing environments that support the deployment and maintenance of ML and AI models.
- Automating pipelines for training, deploying, and monitoring models.
- Collaborating with ML, Data Science, DevOps, and development teams.
- Monitoring the quality of models and data and the performance of production environments.
- Developing solutions that support model and data versioning.
- Participating in building CI/CD processes for AI and ML solutions.
- Optimizing infrastructure and supporting the security and scalability of environments.

We offer:
- B2B cooperation.
- A rate of 145–185 PLN/h net + VAT, depending on experience and project fit.
- Participation in modern projects delivered in an international environment.
- Real influence on the development of environments supporting AI and ML.
- Access to up-to-date tools, technologies, and good practices.
- A hybrid cooperation model in one of the locations: Wrocław, Kraków, Warszawa.
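The responsibilities above mention solutions supporting model and data versioning. As a hedged illustration (not taken from the posting, and independent of any specific tool), such versioning is often anchored in a content hash of the artifact, so a registry can tell whether a retrained model actually changed:

```python
import hashlib
from pathlib import Path

def artifact_version(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return a short content-addressed version tag for a model artifact.

    Identical bytes always produce the same tag, so a hypothetical model
    registry can detect whether a retrained artifact differs from the
    previous one. Reads in chunks to handle large model files.
    """
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()[:12]
```

Dedicated tools such as MLflow or DVC track far richer metadata, but the content-hash idea above underpins most of them.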
DevOps Engineer
e-point SA
⚲ Warszawa
100–120 PLN/h net (B2B)
Requirements
- Kubernetes
- Linux
- Terraform
- Docker
- GitLab
- Python
Project description: Our client is a state institution of strategic importance, responsible for maintaining and developing key infrastructure in the civil aviation sector. The institution plays a leading role in ensuring the continuity and safety of transport operations on a national scale, operating under rigorous public norms and international regulatory standards.

Tasks:
• Automating development environments and administrative activities,
• Configuring project environments, servers, and tools,
• Designing, implementing, and maintaining CI/CD processes,
• Monitoring and optimizing system (software) performance,
• Maintaining the continuity of business systems and applications,
• Evaluating and adopting new DevOps tools and standards,
• Creating and updating technical documentation,
• Collaborating with development and QA teams,
• Working as part of a team and sharing technical knowledge.

Requirements:
• At least 3 years of experience in a DevOps role,
• Familiarity with Agile/Scrum methodologies,
• Scripting skills (e.g., Bash, Python),
• Very good knowledge of Linux (RHEL),
• Knowledge of Kubernetes (e.g., AKS, OpenShift/OKD) and deployment management (Helm, Kustomize, ArgoCD),
• Knowledge of computer networks (TCP/IP, DNS, load balancing, firewall, iptables),
• Knowledge of Git and working with repositories,
• Practical knowledge of Docker, Terraform, and Puppet/Ansible/Chef,
• Knowledge of monitoring tools: Zabbix, Prometheus, Grafana,
• Basic knowledge of Azure,
• Familiarity with security topics for cloud services and microservices,
• Ability to work with CI/CD (GitLab CI, GitHub Actions, Jenkins),
• Ability to design solutions for large IT systems,
• Optional knowledge of JavaScript, Java, or Groovy,
• English at a level sufficient to work with technical documentation,
• A technical degree is welcome (computer science, automation, electronics, ICT, or related),
• Fluent Polish.

What we offer:
• Remote work with occasional visits to the client's office in Warsaw.
• Work on our hardware.
• A flat management structure.
• Self-organizing teams.
• Plenty of room for your own initiative.
• Possible business trips within the European Union or the United Kingdom.
Cloud Engineer
Integral Solutions
⚲ Warszawa
11,088–16,632 PLN (B2B)
Requirements
- GCP
- SAS
- Bash
- PowerShell
- Python
- CA
- Linux
- Windows Server
- Ansible
- DevOps
- Infrastructure as Code
- Terraform
- ARM
- CloudFormation
- Docker
- Kubernetes
About the project:
- Location: Warsaw, Poland
- Work mode: hybrid, 1 day in the office, 4 days remote

Who we are looking for: Join us as a Senior Cloud Engineer and use your years of experience with GCP and SAS Viya to build modern, secure cloud environments. In this role you will develop solutions based on automation, containerization, CI/CD, and Infrastructure as Code, while applying security tooling and DevOps best practices. If you want to have a real impact on system architecture, raise security standards, and work with the latest cloud technologies, we are waiting for you.

What we offer:
- B2B contract work
- Access to the myBenefit system, including private medical insurance, Multisport, and more
- Gifts for various occasions
- Company events

Requirements:
- 5 years of experience with the GCP public cloud and the services it offers,
- Practical knowledge of SAS Viya backed by at least 5 years of experience,
- Knowledge of cloud environment security,
- Familiarity with network security tools and mechanisms in the public cloud (firewalls, VPNs, certificates, policies),
- Scripting skills, e.g., in Bash, PowerShell, or Python,
- Familiarity with identity management solutions,
- Basic administration knowledge of Linux or Windows Server,
- Knowledge of Ansible,
- Ability to design and implement CI/CD pipelines using DevOps tools and security solutions,
- Experience with the Infrastructure as Code approach (e.g., Terraform, ARM Templates, CloudFormation),
- Practical skills in virtualization, containerization, and working with Docker and Kubernetes,
- Familiarity with version control systems and source code management tools.
Engineering Manager
bunch
Requirements
- Kubernetes
- AWS
- TypeScript
- Node.js
- React
About bunch: bunch is building the backbone of private markets, combining exceptional expertise, operational excellence, and frictionless technology. The platform enables funds and private investors to set up and manage their investment entities seamlessly and securely. To expand our core team, we are hiring an Engineering Manager for our Platform Team who wants to play a pivotal role in growing our product development team and transforming the private markets industry. Your primary responsibilities will be ensuring our engineering output delivers tangible value, and owning technical and architectural direction and team resources. You will be involved in shaping the future of the tech team as we grow to the next stage. We are focusing on product excellence and building sustainable tech processes.

How the role contributes to the company vision:
- You will be at the core of building the technology that powers bunch. We aim to build something meaningful with a real-world impact.
- You will lead a platform team and have full accountability for delivering reliable, secure, and scalable infrastructure and tooling that enable product squads to solve real customer problems.

What you will be doing:
- Lead, grow, and support the platform team through coaching, clear expectations, and a culture of ownership, reliability, and continuous improvement.
- Own the platform strategy and roadmap, prioritising initiatives, managing team capacity, and ensuring strong execution and impact.
- Provide technical direction for our AWS- and Kubernetes-based platform, guiding architectural decisions that balance speed, reliability, security, and cost.
- Oversee cloud infrastructure and IaC practices, ensuring the team delivers secure, resilient, well-managed AWS environments.
- Drive improvements to CI/CD and developer experience by setting direction, aligning stakeholders, and removing friction for product teams.
- Champion observability and reliability, ensuring robust logging, metrics, tracing, and effective incident response practices.
- Partner closely with product, engineering, security, and operations experts in the company to align standards, resolve cross-team issues, and support smooth service integration.
- Maintain platform quality by steering tech-debt reduction, documentation, and alignment with security and compliance expectations.

Our tech stack: We strive to keep our technology stack as simple and efficient as possible.
- Frontend & Backend: TypeScript on both the frontend (Svelte, but moving to React) and the backend (Node.js with Nest.js)
- Database: MySQL
- Infrastructure: Kubernetes (K8s) on top of AWS
- Source Control & CI/CD: GitHub
- Third-Party Applications: FusionAuth for authentication, Retool for low-code internal frontend solutions, Twilio and SendGrid for sending SMS and emails, Tally for creating custom forms

About bunch: bunch is Europe's leading tech-enabled fund administrator for VC, PE, and alternative assets. By combining AI-powered automation with deep domain expertise, we provide a single source of truth that replaces outdated, fragmented processes and enables clients to master the entire fund lifecycle. Private markets are experiencing unprecedented growth, with alternative assets projected to reach $40 trillion by the end of the decade. To power this growth, we have raised €22.8 million to date, including our $15.5M Series A in July 2024, and are accelerating our mission to build the backbone of private markets in Europe. Founded in 2021 and headquartered in Berlin, bunch has expanded to Amsterdam, London, and Luxembourg, now supporting over 500 investment structures, 150 fund and asset managers, and more than 10,000 investors. As we prepare for our next stage of growth, we are looking for ambitious talent to continue redefining this financial category.
DevOps Engineer (with German)
Andersen
⚲ Berlin, Dresden
Requirements
- AWS
- DevOps
- CI/CD
- Bash
- Docker
- Azure
- Terraform
- Kubernetes
- Python
Andersen is hiring a DevOps Engineer (with German) to support international projects, building and maintaining scalable infrastructure and ensuring reliable delivery across diverse digital solutions. Andersen is a pre-IPO software development company that provides a full cycle of services, following project management standards and best practices. For over 19 years, we have been helping enterprises and middle-sized firms transform their businesses by creating effective digital solutions using innovative technologies. Immerse yourself in a global landscape as we collaborate with enterprises spanning North America, Western Europe, Israel, Australia, and the UAE. Our diverse expertise extends across FinTech, Healthcare, Retail, Telecom, Media & Entertainment, Logistics, Travel & Hospitality, eCommerce, and more.

Responsibilities:
• Designing, implementing, and maintaining CI/CD pipelines using modern CI/CD tools.
• Managing cloud infrastructure, primarily on AWS.
• Provisioning and managing infrastructure using Terraform and Infrastructure as Code practices.
• Building, deploying, and operating containerized applications using Docker and Kubernetes.
• Writing and maintaining automation scripts in Python and Bash.
• Monitoring system performance, availability, and reliability, and troubleshooting production issues.
• Collaborating with development teams to improve deployment workflows and system scalability.
• Supporting and maintaining on-premises infrastructure where required.
• Ensuring security, reliability, and best practices across cloud and infrastructure environments.

Must-have:
• 2+ years of experience as a DevOps Engineer.
• Strong knowledge of and experience with AWS.
• Experience with Docker and Kubernetes.
• Experience with Terraform.
• Knowledge of CI/CD tools.
• Ability to write and understand scripts in Python and Bash.
• English – Upper-Intermediate or above.
• German – Intermediate+ or above.

Nice to have:
• Experience with Azure.
• Experience with on-premises solutions.

Reasons why this job would be interesting to you:
• Andersen cooperates with companies such as Siemens, Johnson & Johnson, AstraZeneca, BNP Paribas, Allianz, Ryanair, TUI, Verivox, Media Markt, etc.
• For the past four years, our company has been growing annually by 60–100%, and we constantly involve top-notch specialists in our team.
• Andersen has mentoring and adaptation systems for new employees, and transparent performance review and assessment systems will allow you to determine your development path and plan your growth.
• The most important thing we value in our employees is a commitment to continuous learning. The company supports them in this and gives them access to the best educational platforms, seminars, and practices. In addition, for over 19 years, Andersen has assembled a huge knowledge base and established a robust resource management institution.
• We have been strengthening our expertise since 2007. During this time, we have formed excellent teams with streamlined processes, where you can learn something new from your colleagues every day and enjoy your work.
• We are a cool young team of like-minded people who communicate informally.
• You'll have a stable and competitive salary and an extensive benefits package.
• At Andersen, we have many ways to grow. You can improve as a specialist or a manager, and all your activities will be decently rewarded.

Your personal data is protected in accordance with GDPR regulations. Learn more: https://andersenlab.com/privacy-policy Join us! https://people.andersenlab.com/
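One of the duties listed above is writing automation scripts in Python and Bash. A common building block in such scripts (a generic sketch, not Andersen's code, with invented names) is retrying a flaky infrastructure call with exponential backoff before failing the pipeline:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def retry(op: Callable[[], T], attempts: int = 3, base_delay: float = 0.1) -> T:
    """Run `op`, retrying with exponential backoff on any exception.

    Transient infrastructure errors (timeouts, rate limits) get a few
    more chances; the last failure is re-raised so the caller sees it.
    """
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the real error
            time.sleep(base_delay * (2 ** i))  # 0.1s, 0.2s, 0.4s, ...
    raise RuntimeError("unreachable")  # keeps type checkers satisfied
```

In practice `op` would wrap a cloud API call or a `subprocess` invocation; real scripts usually also log each failed attempt.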
DevOps Engineer (Azure)
Connectis_
⚲ Warszawa
28,000–33,000 PLN (B2B)
Requirements
- Azure
- Python
- CI/CD
- AKS
- Kubernetes
- Docker
- IaC
- Terraform
- Observability
- Prometheus (nice to have)
- Thanos (nice to have)
- Ansible (nice to have)
- Puppet (nice to have)
About the project: Together with our partner, a renowned American consumer goods company, we are looking for an experienced DevOps Engineer (Azure). The project involves implementing and developing a unified observability platform that collects metrics, logs, and traces from the entire infrastructure and application estate at enterprise scale, enabling detection, diagnosis, and automated remediation of issues in IT systems.

We offer:
- Participation in team events and technology meetups for sharing knowledge and experience.
- Support from a dedicated Connectis account manager who is always available to help with project matters.
- Stable, long-term engagement with a company with an established market position.
- 5,000 PLN for referring a friend to any of our projects.
- Growth opportunities in a modern, dynamic IT environment.
- A fast, remote recruitment process.
- 100% remote work.

Thank you for all applications. Please note that we will contact selected candidates only.

Requirements:
- At least 5 years of hands-on experience in a DevOps Engineer role, including experience with the Azure cloud.
- Practical knowledge of Infrastructure as Code (Terraform) for automating resource creation and management.
- Practical knowledge of AKS (Azure Kubernetes Service) for managing Kubernetes clusters.
- Ability to write Python scripts for task automation and system integration.
- Extensive experience with the Azure platform and its cloud services.
- Ability to work with Docker for application containerization.
- Experience in building and maintaining CI/CD pipelines.

Nice to have:
- Experience with Prometheus and Thanos and broad observability knowledge.
- Familiarity with configuration automation tools such as Ansible or Puppet.

Day-to-day tasks:
- Managing and automating platform infrastructure, software, and integration patterns, with a particular focus on reliability and scalability.
- Ensuring high availability and resilience of platform services through monitoring and proactive improvements.
- Automating the creation and configuration of platform infrastructure using the Infrastructure as Code approach.
- Managing and optimizing Kubernetes clusters to ensure reliability and scalability.
- Implementing and maintaining CI/CD pipelines for platform components and integrations.
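Since this project centers on a platform that collects metrics (with Prometheus and Thanos listed as nice-to-haves), it may help to recall the Prometheus text exposition format that scrapers consume. The formatter below is an illustrative sketch, not part of the posting:

```python
def prometheus_line(name: str, labels: dict[str, str], value: float) -> str:
    """Render one sample in the Prometheus text exposition format:

        metric_name{label="value",...} sample_value

    Labels are sorted so output is deterministic. (Real exporters also
    escape quotes/backslashes in label values; omitted here for brevity.)
    """
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"
```

A scrape endpoint simply serves many such lines as plain text, which Prometheus pulls on an interval and Thanos can then store long-term.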
DevOps Administrator
ONITO sp. z o.o.
⚲ Białystok
10,000–12,000 PLN gross / month
Requirements
- Linux
- Kubernetes
- HAProxy
- Proxmox
- FortiGate
- CEPH
- Git
- GitLab
- MySQL
- PostgreSQL
- Redis
- Jira
Our requirements:
- good knowledge of Linux operating systems (RedHat/Ubuntu),
- familiarity with computer networking concepts,
- knowledge of virtualization and containers (K8s, Docker, KVM, Proxmox, VMware),
- familiarity with SDS solutions (e.g., CEPH, ZFS),
- knowledge of web and proxy servers (Nginx, Apache, Tomcat, HAProxy),
- knowledge of configuring network devices, e.g., FortiGate,
- knowledge of Git and the GitOps concept,
- readiness to take part in on-call duty.

Nice to have:
- 2 years of experience in a similar role,
- a CKA or RHCE certificate,
- knowledge of how web applications work,
- the ability to use APM-class solutions,
- knowledge of databases (MySQL, PostgreSQL, Redis) covering installation, configuration, and basic administration,
- familiarity with the Atlassian suite, including Jira,
- independence, reliability, and punctuality.

Responsibilities:
- delivering DevOps projects for external clients,
- administering applications and IT environments in the AWS, Azure, and GCP clouds as well as on-premise systems,
- analyzing and solving problems in complex environments,
- maintaining monitoring, logging, and backups of environments,
- implementing CI/CD solutions (GitLab, Jenkins),
- automating and orchestrating IT processes,
- building and managing dedicated K8s clusters,
- learning new technologies and continuing to grow.

We offer:
- the chance to develop skills across public cloud (AWS/Azure/GCP), private cloud (building solutions from scratch), hybrid, and on-premise environments,
- an employment contract,
- 100% remote work (though we are always happy to meet for a beer),
- 8 training days per year (days you spend learning new technologies),
- private medical care,
- a Multisport card,
- flexible working hours.
👉 Senior MultiCloud DevOps / Platform Engineer with Harness
Xebia sp. z o.o.
⚲ Wrocław, Rzeszów, Gdańsk, Warszawa
22,000–30,000 PLN net (B2B)
Requirements
- Cloud
- Harness
- GitOps
- Terraform
- Kubernetes
- Python
🟣 You will be:

Owning and operating the Harness Platform by:
• designing, implementing, and maintaining Harness pipelines for Kubernetes, ECS, serverless, and VM deployments, including canary/blue-green strategies and automated rollbacks,
• operating CI pipelines and shared build infrastructure, improving build performance and developer feedback loops,
• configuring and managing Feature Flags to support progressive delivery and experimentation,
• integrating Harness SRM/Chaos (if applicable) to support deployment verification, resilience testing, and error budget policies,
• partnering with FinOps to leverage cost dashboards, budgets, and guardrails for cloud spend optimization.

Engineering delivery pipelines, environments, and infrastructure by:
• creating reusable pipeline templates, governance controls, and "paved roads" for application teams,
• implementing secrets management, artifact versioning, and environment promotion flows (dev → test → staging → prod),
• standardizing infrastructure provisioning with Terraform, Helm/Kustomize, CloudFormation, and ARM/Bicep,
• supporting Git-based workflows (GitHub, GitLab, Azure Repos, Bitbucket) and applying GitOps practices (Argo CD/Flux) where appropriate.

Strengthening reliability, security, and compliance by:
• embedding automated tests, security scans (SAST, DAST, dependency/image scanning, SBOM), and quality gates into CI/CD pipelines,
• enforcing RBAC, least privilege, SSO/SCIM, and audit readiness across platforms,
• contributing to incident response, post-incident reviews, and the continuous evolution of SLIs/SLOs.

Building and integrating observability and performance tooling by:
• integrating observability systems (Prometheus/Grafana, OpenTelemetry, Datadog, New Relic) into deployment verification and runtime dashboards,
• optimizing reliability, build performance, caching, storage architecture, and runtime platform performance.

Driving collaboration and enablement by:
• onboarding product and engineering teams onto the Harness platform,
• running enablement workshops, producing documentation, and maintaining self-service resources,
• measuring and reporting delivery metrics such as lead time, deployment frequency, change fail rate, and MTTR, and driving improvement initiatives based on these insights.

🟣 Your profile:
• ready to work in EST time (occasional overlap),
• 5+ years in DevOps, Platform Engineering, or SRE roles,
• 2+ years hands-on with Harness CI and/or CD, including pipelines-as-code, templates, governance, and rollout strategies,
• strong experience with Kubernetes (operations, Helm/Kustomize, operators),
• good proficiency with at least one major cloud (AWS, Azure, or GCP),
• demonstrated expertise with Terraform, reusable modules, and multicloud provisioning (CloudFormation, ARM/Bicep),
• hands-on experience with scripting (Bash, Python, or Go) and an automation mindset,
• experience with CI/CD and Git-based workflows, GitHub Actions or comparable CI tools,
• familiarity with security integration (SAST/DAST, scanning, OPA/Conftest),
• expertise in observability fundamentals (metrics, logs, traces),
• Ansible for configuration and orchestration,
• upper-intermediate/advanced English (B2/C1),
• work from the European Union region and a work permit are required.

🟣 Nice to have:
• GitOps (Argo CD/Flux),
• Harness Feature Flags, SRM, Chaos, or Cloud Cost Management,
• Kafka experience (operational or integration),
• Elasticsearch cluster operations,
• Redis (caching, broker patterns, session management),
• FinOps exposure,
• familiarity with compliance frameworks (SOC2, ISO27001, HIPAA, PCI),
• SRE experience with SLOs, SLIs, and error budgets,
• PKI, vaulting, workload identity solutions.
🟣 Recruitment Process: CV review – HR Interview – Technical Interview – Client Interview – Decision
🎁 Benefits 🎁
✍ Development:
• development budgets of up to 6,800 PLN,
• we fund certifications, e.g. AWS, Azure,
• access to Udemy, O'Reilly (formerly Safari Books Online), and more,
• events and technology conferences,
• technology Guilds,
• internal training,
• Xebia Upskill.
🩺 We take care of your health:
• private medical healthcare,
• subsidised MultiSport card,
• mental health support.
🤸♂️ We are flexible:
• B2B or employment contract,
• contract for an indefinite period.
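The delivery metrics this role is asked to measure (lead time, deployment frequency, change fail rate, MTTR) are the four DORA metrics; a minimal sketch of how they can be computed from deployment records (field names here are hypothetical, not from any Harness API):

```python
from datetime import datetime

def dora_metrics(deployments, period_days=30):
    """Compute the four DORA delivery metrics from deployment records.

    Each record is a dict with hypothetical fields: 'committed_at' and
    'deployed_at' (datetime), 'failed' (bool), and optionally
    'restored_at' (datetime, set once a failed deploy was remediated).
    """
    lead_times = [(d["deployed_at"] - d["committed_at"]).total_seconds() / 3600
                  for d in deployments]
    failures = [d for d in deployments if d["failed"]]
    restore_times = [(d["restored_at"] - d["deployed_at"]).total_seconds() / 3600
                     for d in failures if d.get("restored_at")]
    return {
        "lead_time_hours": sum(lead_times) / len(lead_times),
        "deploys_per_day": len(deployments) / period_days,
        "change_fail_rate": len(failures) / len(deployments),
        "mttr_hours": sum(restore_times) / len(restore_times) if restore_times else 0.0,
    }

# Illustrative data: one clean deploy, one failed deploy restored 2h later.
metrics = dora_metrics([
    {"committed_at": datetime(2024, 1, 1, 0), "deployed_at": datetime(2024, 1, 1, 4),
     "failed": False},
    {"committed_at": datetime(2024, 1, 2, 0), "deployed_at": datetime(2024, 1, 2, 8),
     "failed": True, "restored_at": datetime(2024, 1, 2, 10)},
], period_days=30)
```

Tracking these per team over time is what turns raw deployment logs into the improvement initiatives the posting mentions.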
👉 Senior AWS DevOps
Xebia sp. z o.o.
⚲ Wrocław, Rzeszów, Gdańsk, Warszawa
22 000 - 30 000 PLN net (B2B)
Requirements
- AWS
- GitOps
- Terraform
- Helm
- Kubernetes
- Python
🟣 You will be:
• building and supporting the tools, processes and infrastructure empowering the faster delivery and scaling of software iterations (ideally with greater quality and the necessary operational insight),
• ensuring availability, reliability and scalability of application infrastructure,
• building and supporting continuous integration/delivery and release tools,
• ensuring the right metrics are collected and monitored.
🟣 Your profile:
• smart and tech-savvy engineer with 5+ years of experience working with DevOps practices and Continuous Delivery,
• practical knowledge of AWS services, infrastructure and networking,
• solid experience with Kubernetes (ideally EKS on AWS) and container orchestration,
• Python knowledge,
• hands-on with GitOps practices, preferably with ArgoCD,
• strong skills in Terraform and Helm,
• proficiency in Bash and PowerShell scripting,
• experience with CI/CD pipelines and tooling (GitLab CI/CD, GitHub Actions, or similar),
• experience with monitoring, observability, and logging tools such as Prometheus, Grafana, AppDynamics, and OpenSearch,
• security awareness (OWASP, encryption, secrets management),
• very communicative and collaborative, with a strong sense of ownership,
• upper intermediate/advanced English (B2/C1).
Work from the European Union region and a work permit are required.
🟣 Nice to have:
• AWS certifications such as Solutions Architect or Platform Engineer,
• familiarity with FluxCD,
• experience with Rancher,
• knowledge of Keycloak.
🟣 Recruitment Process: CV review – HR Interview – Technical Interview – Client Interview – Decision
🎁 Benefits 🎁
✍ Development:
• development budgets of up to 6,800 PLN,
• we fund certifications, e.g. AWS, Azure,
• access to Udemy, O'Reilly (formerly Safari Books Online), and more,
• events and technology conferences,
• technology Guilds,
• internal training,
• Xebia Upskill.
🩺 We take care of your health:
• private medical healthcare,
• subsidised MultiSport card,
• mental health support.
🤸♂️ We are flexible:
• B2B or employment contract,
• contract for an indefinite period.
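The GitOps practice named above (preferably with ArgoCD) boils down to continuously diffing the desired state held in Git against the live cluster state and converging the two; a simplified, tool-agnostic sketch of that diff (not ArgoCD's actual API):

```python
def diff_states(desired, live):
    """Compare desired (Git) vs. live (cluster) resource maps.

    Both arguments are dicts of {resource_name: spec}. Returns the
    actions a GitOps controller would take to converge live onto
    desired: create what is missing, delete what is extra, and update
    anything whose spec has drifted.
    """
    create = sorted(set(desired) - set(live))
    delete = sorted(set(live) - set(desired))
    update = sorted(name for name in set(desired) & set(live)
                    if desired[name] != live[name])
    return {"create": create, "update": update, "delete": delete}
```

Real controllers add health checks, sync waves, and pruning policies on top, but the core loop is this comparison repeated forever.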
Senior Cloud Engineer (Starszy Inżynier Cloud / Starsza Inżynierka Cloud)
DCG
⚲ Warszawa
120 - 150 PLN/h net (B2B)
Requirements
- GCP
- SAS Viya
- Terraform
- Docker
As a recruitment agency, we are aware that every solid business is driven by people with well-matched competencies. Our management style and partnership approach let us adapt flexibly to your needs and provide full support throughout our cooperation. Due to our continued growth and the large number of recruitment projects we run for our Partners, we are looking for a: Senior Cloud Engineer (Starszy Inżynier Cloud / Starsza Inżynierka Cloud). Hybrid work model (1 day per week from the Warsaw office).
Responsibilities:
• Designing, implementing and maintaining GCP cloud solutions built on the SAS Viya platform, including optimising the architecture, performance and costs of production and test environments
• Cooperating with business and technical teams on the development and maintenance of SAS Viya-based cloud solutions, including support for deployments and migrations
Requirements:
• Knowledge of the public cloud (GCP) and its available services (5+ years of experience)
• Knowledge of SAS Viya technology (5+ years of experience)
• Experience delivering projects in the banking or insurance sector
Nice to have:
• Knowledge of cloud security solutions
• Knowledge of identity management solutions
• Knowledge of network security solutions in the public cloud (firewalls, VPNs, certificates, policies)
• Ability to design and build CI/CD pipelines with DevOps tooling, including security tools
• Knowledge and practical use of Infrastructure as Code (e.g. Terraform, ARM Templates, CloudFormation)
• Practical knowledge of virtualisation and containerisation technologies, Docker, Kubernetes
• Knowledge of version control and source code management systems
• Basic administration of Linux or Windows Server systems
• Knowledge of Ansible
• Experience writing scripts in e.g. Bash, PowerShell, Python
We offer:
• Subsidised sports card
• Ongoing consultant support
• Employee referral programme
Senior Kubernetes Engineer (up to 5M HUF sign-up bonus)
Origoss Solutions Ltd.
⚲ Budapest
22 026 - 27 532 PLN (PERMANENT)
Requirements
- Kubernetes
- CI/CD
- GitLab
- ArgoCD
- Terraform
- Golang (nice to have)
About the project:
About Us
We believe in the further expansion of cloud native technologies, and the widespread acceptance of open-source solutions will play a key role in this. We have therefore been focusing on cloud native technologies and actively participating in the open-source community. We help transforming enterprises, scaling startups, and professional integrators on their successful cloud native journey, using our several decades of experience in consultancy, development, and support for legacy network and IT system operations. We are an international team working on diverse projects across multiple industries. Our culture is built on collaboration, flexibility, and continuous learning.
About the Role
We are looking for senior Kubernetes engineers to join our growing team. This position offers flexible remote or hybrid work options within the EU. You'll collaborate with colleagues across borders and help build scalable, reliable infrastructure.
Benefits
- Flexible remote or hybrid setup (within the EU)
- International projects with diverse teams
- Growth opportunities in DevOps practices and cloud technologies
Requirements:
Must have
- Strong communication skills in English (our team's working language).
- Hands-on experience with: Kubernetes, CI/CD, GitLab, ArgoCD, Terraform.
- Eligibility to work in the EU (valid work permit or citizenship).
Nice to Have
- Proficiency in German
- Experience developing operators in Go
Day-to-day tasks:
- Manage Kubernetes clusters and infrastructure as code.
- Build, maintain, and optimize CI/CD pipelines.
- Collaborate with developers and other engineers on DevOps practices.
- Support deployment and monitoring of cloud-native applications.
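The CI/CD pipelines this role builds and optimizes are, structurally, dependency graphs of stages, so a valid execution order is a topological sort. A minimal sketch using Python's standard library (the dependency map is conceptually similar to GitLab CI's `needs:` keyword, though GitLab's implementation is its own):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def stage_order(dependencies):
    """Return a valid execution order for pipeline stages.

    `dependencies` maps each stage to the set of stages it needs to
    have finished first; raises graphlib.CycleError on circular needs.
    """
    return list(TopologicalSorter(dependencies).static_order())

# Hypothetical pipeline: deploy needs test, test needs build.
order = stage_order({"deploy": {"test"}, "test": {"build"}})
```

Stages with no ordering between them can run in parallel, which is exactly the optimization lever when shortening pipeline wall-clock time.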
Senior DevSecOps Engineer
Monday.com
⚲ Warsaw
49 000 - 59 000 PLN (PERMANENT)
Requirements
- CI/CD Pipelines
- Cloud security
- AI
- Terraform
- Cloud
- DevOps
- Python
- Kubernetes
- Communication skills
- AWS
- Security (nice to have)
- Web application firewall (nice to have)
- WAF (nice to have)
- Cloudflare (nice to have)
- AWS (nice to have)
- Datadog (nice to have)
- Go (nice to have)
- SonarQube (nice to have)
- TypeScript (nice to have)
About the project: At monday.com, we help teams get more work done. We are the best AI work platform that empowers teams to automate, build, and scale their impact end-to-end with tools that actually execute the work for you. With over $1B in ARR, 250,000+ customers, and a global team, we're serious about building a product people love to use and giving our employees the same ownership and flexibility to shape the way the world works. We fully embrace the AI revolution, and we equip you with AI-powered IDEs, customizable agent rules, prompt engineering tools to streamline your workflow, and AI-infused CI/CD pipelines designed to boost speed and reliability. You'll also tap into AI-driven insights, helping you make smarter decisions, faster. The role is based in our Warsaw office - established in 2022, it is a growing hub for engineers who love solving impactful problems. Teams here work on a broad range of challenges that push the boundaries of our products and infrastructure.
About the role
Securing CI/CD Pipelines:
- Implement and manage security controls for CI/CD pipelines.
- Automate security testing and vulnerability management within the CI/CD process using tools like Terraform.
- Collaborate with development teams to integrate security best practices and policies.
Cloud Security Posture Management:
- Develop and implement cloud security best practices and policies.
- Continuously monitor cloud environments using tools like AWS GuardDuty, Wiz, Orca, Datadog, and similar to ensure compliance with security standards.
- Collaborate with cloud operations teams to identify and remediate security risks.
- Manage cloud security configuration with tools like Terraform and CDK.
Implementing a Security Self-Service approach:
- Develop security tools in the organization's IDP.
- Test and run PoCs of new security tools to increase the efficiency of development practices in the security context and foster the Secure by Design principle.
Salary & Compensation
At monday.com, our total compensation package includes:
- Base salary
- Bonus target
- Restricted Stock Units (RSUs)
The total target monthly compensation for this role ranges from 49,000 PLN to 59,000 PLN. The total compensation package shown here reflects the current monday.com stock price and may change over time as the stock price varies. The target bonus and RSU grant are discretionary, depend on individual and company performance, and are subject to Board approval and the company's equity plan, in line with monday.com's bonus policy as updated from time to time. The benefits listed reflect programs currently offered in Poland; availability and terms may vary depending on the role, contract type, and company policy. These details do not constitute an employment offer or guarantee and may be modified at the company's discretion, in accordance with applicable law.
Requirements:
Your experience & skills
- 5+ years of experience in DevOps/DevSecOps or related roles.
- Passion for keeping systems secure.
- Proficiency in one of the following languages: Python, Go, TypeScript.
- Experience with Kubernetes.
- Strong understanding of operating systems and networking.
- Expertise implementing Shift Left/Secure by Design inside CI/CD pipelines using SAST/DAST tools such as SonarQube, Dependabot alerts, Wiz, and others.
- Experience configuring and managing Web Application Firewalls (WAFs) such as AWS WAF, Cloudflare, or similar.
- Excellent problem-solving and communication skills.
Day-to-day tasks:
- We're looking for a Senior DevSecOps engineer to join our team, where you'll have the chance to grow your career while solving impactful, high-scale problems. This role requires a keen understanding of security practices integrated within the software development lifecycle. The ideal candidate will play a crucial role in securing our CI/CD pipelines, working with Web Application Firewalls, and managing our Cloud Security Posture.
A person in this role will be part of a team that serves as a key link in communication and synchronization between several groups of stakeholders (Infrastructure, Development, Security), fostering a culture of security awareness and collaboration across all teams.
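A shift-left security gate of the kind this role implements typically fails the pipeline when scan findings reach a severity threshold; a tool-agnostic sketch (the severity names and finding shape are assumptions, not a SonarQube or Wiz schema):

```python
# Ordered severity scale; rank grows with impact.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def security_gate(findings, fail_at="high"):
    """Evaluate scan findings against a pipeline quality gate.

    `findings` is a list of dicts with a 'severity' key. Returns
    (passed, blocking): the pipeline should be blocked if any finding
    is at or above the `fail_at` threshold.
    """
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)
```

In practice the gate runs after each scanner stage (SAST, dependency, image scan) and its threshold is policy-controlled rather than hard-coded, so teams can tighten it over time.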
Senior DevOps Engineer
Egnyte Poland
⚲ Poznań
Requirements
- DevOps
- Solutions powered by AI
- Python
- IaC
Senior DevOps Engineer
Poznan, Poland
ABOUT EGNYTE
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com. Our Production Engineering team enables Egnyte to provide customers access to their data 24/7 by providing best-in-class infrastructure.
WHAT YOU'LL DO:
• Propose and utilize new AI-driven solutions to improve and automate our infrastructure.
• Seek out new AI solutions to increase the productivity of our development teams.
• Design, build and maintain self-hosted and cloud environments to serve our own applications and services.
• Collaborate with software developers to build stable, scalable and high-performance solutions.
• Take part in big projects like migrating solutions from self-hosted environments to the cloud, from virtual machines to Kubernetes, and from monolith to microservices.
• Proactively make our organization and technology better!
• Advise others on how DevOps can make a positive impact on their work.
• Share knowledge and mentor more junior team members while still learning and gaining new skills yourself.
• Maintain consistently high standards of communication, productivity, and teamwork across all teams.
YOUR QUALIFICATIONS:
• 5+ years of proven experience as a DevOps Engineer or System Administrator, working on infrastructure or build processes.
• Proven track record in designing solutions powered by AI.
• Enthusiasm for innovative (AI-driven) solutions and confidence in implementing them.
• Very good coding skills for writing automation solutions (Python or other scripting languages).
• Fluency and strong proficiency in Infrastructure as Code (IaC) tools.
• Strong proficiency in Configuration Management tools.
• Experience with public cloud services.
• Experience with metric-based monitoring solutions.
• Very good understanding of both Kubernetes and the Linux operating system at the administration level.
• Ability to choose and leverage cloud-managed solutions.
• Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude).
• Solid English skills to communicate effectively with other team members.
BENEFITS:
• Attractive salary package based on skillset
• Your own Egnyte account with lifetime access to 50TB of cloud storage
• MyBenefit: you can choose a MultiSport card or gift cards every month
• Private medical healthcare
• In-house English classes
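Metric-based monitoring of the kind listed above often centres on an SLO error budget; a minimal sketch of the arithmetic for a simple request-based SLI (the function and field names are illustrative):

```python
def error_budget(slo_target, total_requests, failed_requests):
    """Report error-budget consumption for a request-based SLO.

    slo_target: e.g. 0.999 means at most 0.1% of requests may fail
    within the measurement window.
    """
    allowed = (1.0 - slo_target) * total_requests   # budget, in requests
    consumed = failed_requests / allowed if allowed else float("inf")
    return {
        "availability": 1.0 - failed_requests / total_requests,
        "budget_consumed": consumed,        # 1.0 == budget fully spent
        "remaining_failures": max(0.0, allowed - failed_requests),
    }

# Hypothetical window: a 99.9% SLO over one million requests, 500 failed.
report = error_budget(0.999, 1_000_000, 500)
```

Alerting on the budget burn rate, rather than on raw error counts, is what lets teams trade reliability work against feature work deliberately.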
Senior DevOps Engineer
Schibsted &Vend Polska
⚲ Kraków
17 000 - 25 000 PLN (PERMANENT)
Requirements
- DevOps
- Azure
- Docker
- AWS (nice to have)
About the project: About Podme and the Team
Podme was founded in 2017 with a vision to enrich everyday life by taking audio entertainment to its full potential. Backed by Schibsted News Media - the largest media group in Scandinavia - Podme is your go-to place for quality podcasts and audiobooks, bringing you a fresh selection of curated content, with popular titles found nowhere else. As Podme's active user base is steadily growing, we are strengthening our infrastructure capabilities in Microsoft Azure to support higher traffic, resilient cloud services, and self-service infrastructure for product teams. To help us do this, we are looking for a Senior DevOps Engineer to join our Content Management team, distributed across Poland, Sweden and Bosnia.
Tech Stack and Tools
- Cloud: Azure (AKS, networking, platform services)
- Containerization & Orchestration: Docker, Kubernetes
- Messaging & Data: RabbitMQ, Redis
- Automation & CI/CD: Azure DevOps, ArgoCD
- Observability tools: Grafana, Prometheus
- Infrastructure as Code: Terraform, GitOps
Requirements:
- 5+ years of experience in DevOps / Cloud / Infrastructure engineering (Senior level).
- Strong experience with Azure and production Kubernetes clusters.
- Deep knowledge of containerization (Docker) and orchestration.
- Experience with Redis, RabbitMQ or similar technologies.
- Demonstrable expertise with automation (IaC / CI/CD).
- Familiarity with GitOps principles.
- Strong Linux experience and scripting skills.
- Ability to work across distributed teams and with cross-functional stakeholders.
- Good command of English; Polish is a plus.
- Experience in cloud security or general IT security is an advantage.
Nice to Have
- Experience with infrastructure security concepts, practices and tooling.
- Experience in creating self-service platforms and developer enablement tools.
- Familiarity with AWS cloud services.
- Startup experience.
Day-to-day tasks:
- Build self-service infrastructure workflows that enable product teams to provision environments and infra hosting independently.
- Design, implement and maintain scalable cloud infrastructure with IaC and provisioning automation.
- Improve system observability, logging, monitoring and alerting.
- Build and optimize CI/CD pipelines for reliability and speed.
- Contribute to cloud security and general IT security best practices.
- Define and implement disaster recovery strategies and runbooks.
- Collaborate with engineering and product teams across multiple countries.
- Support operational excellence, uptime SLAs and incident response.
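Redis in a stack like this is commonly used via the cache-aside (lazy loading) pattern; a minimal illustration with a plain dict standing in for the Redis client (the class and loader here are hypothetical, not a Podme API):

```python
class CacheAside:
    """Cache-aside pattern: read from the cache, fall back to the
    source of truth on a miss, then populate the cache for next time.

    A plain dict stands in for a Redis client in this sketch; a real
    implementation would also set a TTL on each entry.
    """
    def __init__(self, loader):
        self.cache = {}
        self.loader = loader   # e.g. a database query function
        self.misses = 0

    def get(self, key):
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.loader(key)  # cache miss: hit the DB
        return self.cache[key]
```

The trade-off versus write-through caching is staleness: cache-aside tolerates it until the TTL expires, in exchange for only caching keys that are actually read.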
Senior Cloud & DevOps Engineer (AWS)
Chaos Gears
⚲ Warszawa
25 000 - 32 000 PLN net (B2B)
Requirements
- AWS
- Kubernetes
- Terraform
- Docker
- Jenkins
- CI/CD
- Python
- CloudWatch
- EKS
- EC2
Ready for an adventure with AWS? Join Us!
As a leading AWS Partner in Europe, we support organizations at every stage of their cloud journey. Our strength comes from deep experience in cloud-native architectures and modernization initiatives. By joining our team, you'll have the opportunity to shape secure, scalable platforms on AWS, work hands-on with advanced Data & AI technologies, and guide clients through transformative cloud projects. Here, your work will directly impact how businesses innovate, grow, and unlock new value through cloud technologies. As a Senior Cloud Engineer, you will have the opportunity to participate in and lead projects across greenfield initiatives (designing and building infrastructure from scratch), AWS migrations and modernization efforts, as well as advanced Data and Big Data projects. This role will allow you to develop your expertise in cloud-native architecture, security, automation (IaC, CI/CD, Kubernetes), and working with large-scale data platforms. Additionally, you will be involved in FinOps initiatives, gaining hands-on experience in cost optimization and efficient cloud environment management.
Your Responsibilities:
• Architect and implement scalable, secure, and cost-effective cloud solutions on AWS, guiding clients through best practices in cloud-native design and modernization.
• Design, build, and maintain AWS infrastructure using Infrastructure as Code (IaC) tools such as Terraform and AWS CDK.
• Develop and manage CI/CD pipelines using tools like GitHub Actions, GitLab Pipelines, Jenkins or AWS CodePipeline, ensuring smooth and automated deployment processes.
• Monitor, diagnose, and optimize system performance, ensuring high availability, fault tolerance, and scalability across distributed systems.
• Manage containerized applications using Docker, and orchestrate workloads with Kubernetes, Amazon ECS, or EKS.
• Implement robust security practices, including IAM policies, access control, encryption, and compliance auditing aligned with industry standards.
• Deploy and manage observability stacks using tools such as Amazon CloudWatch, Prometheus, and Grafana for proactive monitoring and alerting.
• Write code to automate tasks and workflows using scripting and programming languages such as Python, TypeScript, Go, or Bash.
• Troubleshoot and resolve infrastructure and deployment issues, collaborating closely with development teams to ensure smooth delivery.
• Maintain clear technical documentation, contribute to internal standards, and support knowledge sharing across teams.
• Lead technical webinars, write blog posts, and publish case studies to share insights and promote thought leadership in the AWS ecosystem.
• Represent the company at conferences and industry events, engaging with the cloud community and staying ahead of emerging trends.
• Support pre-sales and solutioning activities, working with sales teams to design tailored cloud strategies and present technical solutions to clients.
Requirements:
• AWS: Architect and implement secure, scalable, and cost-effective cloud solutions on AWS, leveraging deep expertise across services such as EC2, EKS, Lambda, RDS, S3, IAM, VPC, API Gateway, CloudWatch, and more. Ensure alignment with AWS Well-Architected best practices in design, security, and operational excellence.
• Infrastructure as Code (IaC): Design, build, and manage AWS infrastructure using Terraform and AWS CDK, enabling repeatable, version-controlled, and auditable infrastructure deployments.
• CI/CD & Automation: Develop and maintain CI/CD pipelines using GitHub Actions, GitLab Pipelines, Jenkins, or AWS CodePipeline to support automated, zero-downtime deployments and rapid iteration cycles.
• Programming: Experience in software development and scripting using Python, TypeScript, Go, or Bash.
• Containerization & Orchestration: Manage containerized applications using Docker and orchestrate workloads with Kubernetes, Amazon ECS, or EKS to support scalable microservices architectures.
• Security, Compliance & Networking: Implement robust security practices including IAM policies, encryption, access control, and compliance auditing aligned with industry standards. Design secure, high-availability network architectures using VPC, Transit Gateway, Route 53, and hybrid connectivity solutions.
• Monitoring & Observability: Deploy and manage observability stacks using Amazon CloudWatch, Prometheus, Grafana, and ELK/EFK to enable proactive monitoring, alerting, and incident response.
• Consulting & Advisory: Act as a trusted advisor to enterprise clients, working closely with stakeholders to understand business goals and translate them into scalable, cloud-native solutions. Help organizations in cloud adoption, modernization, and digital transformation initiatives, accelerating time to market and driving measurable business outcomes. Support pre-sales and solutioning efforts by collaborating with sales teams to design tailored cloud strategies and cloud architectures. Represent the company at events, lead technical webinars, and contribute through blog posts and case studies.
• Collaboration & Troubleshooting: Work closely with development and operations teams to troubleshoot and resolve infrastructure and deployment issues, ensuring smooth and reliable delivery pipelines.
• Knowledge Sharing & Enablement: Maintain high-quality technical documentation, contribute to internal standards, and support knowledge sharing through wikis, runbooks, and architectural decision records.
• Good understanding of software engineering best practices such as code reviews, source control management, build processes, testing, deployment, release, and change management.
• AWS Certifications: Hold or be actively pursuing relevant AWS certifications (e.g., Solutions Architect, DevOps Engineer, Advanced Networking) to validate expertise in designing, deploying, and managing cloud-native solutions on AWS.
• Fluency in both English and Polish, written and spoken.
Benefits:
• Continuous learning and growth – we invest in your development through internal training, knowledge-sharing sessions, and hands-on learning from real projects. We also support growth beyond the organization by actively engaging in the AWS Community, including conference talks and industry events.
• Training budget – dedicated funds to support certifications, courses, and professional development aligned with your career goals.
• Medical care package – comprehensive private healthcare to help you stay healthy and focused.
• Multisport card co-financing – because staying active matters, both in and outside of work.
• Language learning platform access – improve your language skills at your own pace, whenever it suits you.
• Flexible working hours & remote work – work when and where you're most productive.
• Company events – time for integration and some fun.
Stay up to date with us: follow our blog regularly and participate in the events we organize.
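Observability-driven deployment verification with tools like CloudWatch usually compares a canary's error rate against the baseline before promoting a release; a simplified decision rule (the thresholds are illustrative assumptions, not an AWS API):

```python
def canary_verdict(baseline_error_rate, canary_error_rate,
                   max_ratio=1.5, hard_limit=0.05):
    """Decide whether a canary deployment should be promoted.

    Roll back if the canary's error rate breaches an absolute hard
    limit, or if it exceeds `max_ratio` times the baseline's rate.
    """
    if canary_error_rate > hard_limit:
        return "rollback"            # absolute ceiling breached
    if baseline_error_rate > 0 and canary_error_rate > max_ratio * baseline_error_rate:
        return "rollback"            # significant regression vs. baseline
    return "promote"
```

Production systems replace this point comparison with statistical tests over time windows, but the shape of the decision (ratio to baseline plus an absolute ceiling) is the same.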
Senior Azure DevOps Engineer
Euvic S.A.
⚲ Remote
23 520 - 26 040 PLN (B2B)
Requirements
- Python
- Azure
- DevOps
- Cloud
- Microsoft Azure
- Azure Kubernetes Service
- Azure Storage
- Vault
- PaaS
- GitHub Actions
- Jenkins
- Azure DevOps
- Kubernetes
- PowerShell
- FinOps (nice to have)
- Terraform (nice to have)
- Bicep (nice to have)
About the project: We are a Polish IT company with a global reach. We deliver projects for international organizations and are currently expanding our engineering team for a project for a client in the pharmaceutical industry. The project involves building and migrating solutions based on Microsoft Azure, including AKS environments, PaaS services, CI/CD process automation, and production deployments. We are looking for someone who can take ownership of key elements of the Azure infrastructure and DevOps processes from day one.
Methodology: Agile / Scrum
Project duration: planned long-term, min. 9 months (with an option to extend)
Work model: 100% remote
IMPORTANT: verification of the candidate's identity will be required at later stages of the recruitment process.
Euvic - more than 5,000 employees and 20 years of experience undoubtedly make us a large company. And yet, despite the passage of time and a constantly growing client base, we are still a company created by IT engineers for IT engineers. We are a dynamically growing company "with a human face", made up of good people building good software. This makes us a solid IT partner focused on its own achievements, its clients' successes, and its employees' professional satisfaction. What do we offer?
- an efficient, fast and friendly recruitment process (skills are not all that matters to us - let's check together whether you fit in with the Good People),
- work and growth at a reputable IT company with an established market position,
- projects for international clients across many industries,
- a flat structure, an "open door" policy and care for good relationships within teams,
- small teams within a large company, and a friendly community within those teams,
- a transparent and attractive training programme based on competency assessments,
- a bonus referral programme,
- buddy support during onboarding,
- growth within the organization - internal training, "Euvic Talks", the opportunity to run internal training sessions,
- participation in social initiatives - we support those in need together with The Good People Foundation.
At Euvic, we build a work environment based on equality, respect and openness. We believe that a diversity of talents and perspectives drives innovation and supports sustainable growth. That is why our recruitment processes are designed to give every candidate an equal chance, regardless of origin, gender, age, sexual orientation, health or other individual characteristics. We strive to build a work culture in which everyone can develop their potential and feel fully accepted.
Wondering whether it's worth it? In our internal survey, #theGoodPeople pointed to the 4 key aspects they value most:
1. I decide for myself how best to do my job.
2. My manager is available and open to suggestions and differing opinions.
3. I am treated with due respect.
4. My team has an atmosphere that fosters collaboration.
Sounds like a good place to work for you? Choose your benefits:
1. A cafeteria benefits system (including a MultiSport card, tickets to sports and music events, and much more),
2. Medical care,
3. Life insurance with the option of purchasing additional packages, e.g. oncological care,
4. Development initiatives.
Join #theGoodPeople - professionals, enthusiasts, good people 🙂
Requirements:
- Several years of experience in DevOps / Cloud Engineering,
- Very good knowledge of Microsoft Azure, in particular:
- Azure Kubernetes Service (AKS),
- Azure Storage,
- Azure Key Vault,
- PaaS services (App Services, Functions, Integration Services, etc.),
- Experience building and maintaining CI/CD (GitHub Actions, Jenkins, Azure DevOps),
- Very good knowledge of Git and working with repositories,
- Experience with production deployments (containers, Kubernetes, PaaS),
- Automation skills in Python or PowerShell.
Nice to have:
- Experience with FinOps or cloud cost optimization,
- Knowledge of Terraform/Bicep,
- Experience with monitoring and observability in Azure,
- Work with tools supporting automation and quality (pytest, Devcontainers, GitHub Advanced Security).
Day-to-day tasks:
- Designing and implementing solutions based on Microsoft Azure,
- Working with: AKS, Azure Storage, Key Vault, PaaS services (App Services, Functions, API Management),
- Creating and developing CI/CD pipelines (GitHub Actions, Jenkins, Azure DevOps),
- Automating build/test/deploy processes,
- Maintaining and developing production environments based on containers and PaaS,
- Collaborating on cloud architecture and security design,
- The development process also uses tools such as GitHub Copilot and solutions based on Azure OpenAI,
- Depending on experience, a Mid or Senior role is possible.
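Build/test/deploy automation in Python, as this role requires, often includes computing the next release version from the type of change being shipped; a small hypothetical helper following Semantic Versioning:

```python
def bump_version(version, change):
    """Return the next semantic version for a given change type.

    version: a 'MAJOR.MINOR.PATCH' string;
    change:  'major' | 'minor' | 'patch', per SemVer rules (lower
    components reset to zero when a higher one is bumped).
    """
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")
```

In a pipeline, the change type would typically be derived from commit messages (e.g. conventional commits) rather than passed by hand.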
Senior AWS DevOps
Xebia sp. z o.o.
⚲ Wrocław, Gdańsk, Rzeszów, Warszawa
22 000 - 30 000 PLN (B2B)
Requirements
- AWS
- Python
- Kubernetes
- Git
- Terraform
- Helm
- DevOps
About the project: Who We Are
While Xebia is a global tech company, our journey in CEE started with two Polish companies – PGS Software, known for world-class cloud and software solutions, and GetInData, a pioneer in Big Data. Today, we're a team of 1,000+ experts delivering top-notch work across cloud, data, and software. And we're just getting started.
What We Do
We work on projects that matter – and that make a difference. From fintech and e-commerce to aviation, logistics, media, and fashion, we help our clients build scalable platforms, data and AI solutions, and cutting-edge applications to shape the future of tech. Our clients include McLaren, Aviva, Deloitte, Spotify, Disney, ING, UPS, Tesco, Truecaller, AllSaints, Volotea, Schmitz Cargobull, Allegro, InPost, and many, many more. We value smart tech, real ownership, and continuous growth. We use modern, open-source stacks, and we're proud to be trusted partners of Databricks, dbt, Snowflake, Azure, GCP, and AWS. Fun fact: we were the first AWS Premier Partner in Poland!
Beyond Projects
What makes Xebia special? Our community. We support tech communities, organize meetups (Software Talks, Data Tech Talks), and have a culture that actively supports your growth via Guilds, Labs, and personal development budgets - for both tech and soft skills. It's not just a job. It's a place to grow. What sets us apart? Our mindset. Our vibe. Our people. And while that's hard to capture in text – come visit us and see for yourself.
Requirements: Your profile:
- ready to start immediately,
- smart and tech-savvy engineer with 5+ years of experience working with DevOps practices and Continuous Delivery,
- practical knowledge of AWS services, infrastructure and networking,
- solid experience with Kubernetes (ideally EKS on AWS) and container orchestration,
- Python knowledge,
- hands-on with GitOps practices, preferably with ArgoCD,
- strong skills in Terraform and Helm,
- proficiency in Bash and PowerShell scripting,
- experience with CI/CD pipelines and tooling (GitLab CI/CD, GitHub Actions, or similar),
- experience with monitoring, observability, and logging tools, such as Prometheus, Grafana, AppDynamics, and OpenSearch,
- security awareness (OWASP, encryption, secrets management),
- very communicative and collaborative, with a strong sense of ownership,
- upper-intermediate/advanced English (B2/C1).
Work from the European Union region and a work permit are required.
Nice to have:
- AWS certifications such as Solutions Architect or Platform Engineer,
- familiarity with FluxCD,
- experience with Rancher,
- knowledge of Keycloak.
Daily tasks:
- building and supporting the tools, processes and infrastructure empowering faster delivery and scaling of software iterations (ideally with greater quality and the necessary operational insight),
- ensuring availability, reliability and scalability of application infrastructure,
- building and supporting continuous integration/delivery and release tools,
- ensuring the right metrics are collected and monitored.
SAP ABAP Developer
Atos Poland Global Services Sp. z o.o.
⚲ Wrocław, Psie Pole
Requirements
- SAP HANA
Our requirements: Experience in S/4HANA Private Edition greenfield projects. Strong knowledge of modern ABAP, BTP cloud-native development, integration, and security. At least one full lifecycle implementation as a Senior/Lead Developer. Understanding of DevOps/CI/CD and quality standards. Fluent English.
Nice to have: Fiori elements, Embedded Analytics, BAdIs, In-App Extensibility.
About the project: We are looking for a SAP ABAP Developer experienced in S/4HANA Private Edition greenfield implementations based on SAP Activate and SAP Best Practices.
Responsibilities: Develop and deliver modern ABAP solutions in an S/4HANA greenfield project. Apply Clean Core principles and use only released APIs/whitelisted objects. Choose the right extensibility option (In-App, ABAP Cloud, BTP side-by-side). Implement event-driven extensions using Enterprise Event Enablement/Event Mesh. Collaborate with functional and technical teams to ensure integration, security, and quality.
We offer: Access to training and certifications (access to the SAP Learning Hub platform). Hybrid mode of work (on-site only max. 2 times per month, based on your location). Wellbeing programs & work-life balance: integration and passion-sharing events. Private medical and dental care. Benefits platform (shopping, cinema, sport, etc.). Co-funding of sport activities, e.g., Multisport & OK System cards, b-active program. Gift packages for special occasions: Easter, Christmas, Children’s Day. Appreciation for seniority: additional days off, Atos Jubilee gifts. Charity and eco initiatives. Flexible working hours.
Observability Lead
SQUARE ONE RESOURCES sp. z o.o.
⚲ Warszawa, Mokotów
Requirements
- DevOps
Our requirements: 10+ years of experience leading complex, cross-functional IT initiatives, including observability or monitoring transformations. Strong IT background in Infrastructure, DevOps, SRE, or Engineering. Proven experience integrating platforms in enterprise environments, including hybrid and multi-cloud setups. Excellent stakeholder management and communication skills, including working with senior leadership. Hands-on, pragmatic, collaborative, and able to work both strategically and operationally.
About the project: Drive the strategy, design, and implementation of our new observability platform. Coordinate across teams, guide tool selection, and ensure successful integration across enterprise platforms and hybrid environments.
Responsibilities: Lead the observability workstream end-to-end, from strategy through implementation and adoption. Own the evaluation, selection, and rollout of the enterprise observability platform. Design and implement integrations across cloud and on-premise environments. Define and promote best practices for logs, metrics, traces, dashboards, alerting, and event correlation. Establish cost-efficient ingestion, storage, and operational models to optimise performance and scalability. Provide clear communication and reporting to Directors, VPs, and senior stakeholders. Provide hands-on technical support and guidance to teams. Drive documentation, enablement, and change management to ensure adoption across the organisation.
Observability Lead
SquareOne
⚲ Remote
25 200 - 33 600 PLN (B2B)
Requirements
- Logistics
- Cloud
- Storage
- DevOps
- SRE
- Stakeholder management
- Communication skills
About the project: The client is the leading European specialist in spare parts and technical solutions for the agricultural, forestry, and construction sectors. With operations in more than 20 countries, the client combines logistics excellence, deep domain knowledge, and smart digital solutions to make life easier for its customers.
About the role: We’re looking for an experienced Observability Lead to drive the strategy, design, and implementation of our new observability platform. This role will lead the workstream on a daily basis, coordinating across teams, guiding tool selection, and ensuring successful integration across enterprise platforms and hybrid environments.
Some facts about our client:
- We work in a Product Model and use data to drive decisions.
- We invest in people – you own your growth, with support when you need it.
- With 60+ nationalities, we are an open, inclusive, international community.
- We value transparency, collaboration, and pragmatic decision-making.
Requirements:
- 10+ years of experience leading complex, cross-functional IT initiatives, including observability or monitoring transformations.
- Strong IT background in Infrastructure, DevOps, SRE, or Engineering.
- Proven experience integrating platforms in enterprise environments, including hybrid and multi-cloud setups.
- Excellent stakeholder management and communication skills, including working with senior leadership.
- Hands-on, pragmatic, collaborative, and able to work both strategically and operationally.
Daily tasks:
- Lead the observability workstream end-to-end, from strategy through implementation and adoption.
- Own the evaluation, selection, and rollout of the enterprise observability platform.
- Design and implement integrations across cloud and on-premise environments.
- Define and promote best practices for logs, metrics, traces, dashboards, alerting, and event correlation.
- Establish cost-efficient ingestion, storage, and operational models to optimise performance and scalability.
- Provide clear communication and reporting to Directors, VPs, and senior stakeholders.
- Provide hands-on technical support and guidance to teams.
- Drive documentation, enablement, and change management to ensure adoption across the organisation.
Junior Network and Systems Administrator
GLOBITEL sp. z o.o.
⚲ Łódź, Polesie
Requirements
- Linux
- Ubuntu
- Debian
- CentOS
- Cisco
- Windows Server
- Zabbix
- MySQL
- PostgreSQL
- Python
- Microsoft Excel
Our requirements: Required:
- experience administering Linux systems
- communicative English, both written and spoken
- ability to solve problems independently
- ability to learn quickly and independently
Nice to have:
- knowledge of IP networking
- knowledge of VoIP/SIP technologies
- knowledge of general telecommunications topics
Responsibilities:
- maintaining and developing Linux server infrastructure
- maintaining IP services and networks
- maintaining VoIP/SIP systems
- maintaining Linux systems
- participating in the team building new VoIP/SIP solutions
- ensuring failure-free operation of the supervised systems
- creating and updating the necessary documentation
- supporting internal and business customers
- taking part in deployments for internal and business customers
We offer: interesting work in a fast-growing company, stable employment, a friendly work atmosphere, and a salary that depends on your skills and commitment.
Medior Kubernetes Engineer (up to 3,8M HUF sign-up bonus)
Origoss Solutions Ltd.
⚲ Budapest
15 418 - 20 924 PLN (PERMANENT)
Requirements
- Kubernetes
- CI/CD
- GitLab
- ArgoCD
- Terraform
- K8s
- Jenkins
About the project: About Us
We believe in the further expansion of cloud-native technologies, and the widespread acceptance of open-source solutions will play a key role in this. We have therefore been focusing on cloud-native technologies and actively participating in the open-source community. We help transform enterprises, scaling startups, and professional integrators on their successful cloud-native journey, drawing on several decades of experience in consultancy, development, and support for legacy network and IT system operations. We are an international team working on diverse projects across multiple industries. Our culture is built on collaboration, flexibility, and continuous learning.
About the Role
We are seeking medior Kubernetes engineers to join our growing team. This position offers flexible remote or hybrid work options within the EU. You’ll collaborate with colleagues across borders and help build scalable, reliable infrastructure.
Benefits
- Flexible remote or hybrid setup (within the EU)
- International projects with diverse teams
- Growth opportunities in DevOps practices and cloud technologies
Requirements: Must have
- Strong communication skills in English (our team's working language).
- Hands-on experience with: Kubernetes, CI/CD, GitLab, ArgoCD, Terraform.
- Eligibility to work in the EU (valid work permit or citizenship).
Nice to Have
- Proficiency in German
Daily tasks:
- Manage Kubernetes clusters and infrastructure as code.
- Build, maintain, and optimize CI/CD pipelines.
- Collaborate with developers and other engineers on DevOps practices.
- Support deployment and monitoring of cloud-native applications.
ML / DevOps Engineer (m/f/d)
Selvita
⚲ Kraków
Requirements
- GitLab CI/CD
- Helm / Kustomize
- Docker
- Linux
- MLOps tools (MLflow, Kubeflow, Airflow)
- Kubernetes
- IaC (Terraform/Ansible)
- GPU on K8s
- Python
- scikit-learn, PyTorch / TensorFlow
Selvita is a global integrated service provider with laboratories in Poland (Krakow, Poznan, and Wroclaw), Croatia (Zagreb), and offices in Cambridge, UK, as well as the Greater Boston Area and San Francisco Bay Area in the US. Our company is dedicated to delivering comprehensive solutions that support our international clients in drug discovery and drug development. As one of the largest preclinical contract research organizations in Europe, Selvita is driven by a clear mission: to bridge the gap between early drug discovery and the clinical stage of drug development. At Selvita, we value partnership, excellence, passion, commitment, and integrity. We build strong, collaborative relationships with our clients and commit to the highest standards of quality. We fuel our pursuit of innovative solutions with enthusiasm, advance drug development with focused dedication, and ensure honesty and transparency in all our work.
We are seeking an ML / DevOps Engineer to join our team to support the software development and biotechnology departments, addressing the evolving needs of both areas. You will work on internal projects within a biotechnology environment focused on AI, GPU cluster operations, and building the company’s global MLOps stack. This is a newly created role that combines maintaining and developing DevOps infrastructure with building MLOps capabilities from the ground up. In addition to daily responsibilities related to CI/CD pipelines, Kubernetes environments, and system reliability, you will play a key role in designing and implementing our future MLOps ecosystem. This position requires a hands-on, proactive mindset: we are looking for someone who enjoys working in less-defined environments, is comfortable with manual and evolving processes, and is motivated to create new solutions rather than follow fully established ones. Join us at Selvita, where these values guide our mission to advance drug discovery to the clinical stage of drug development.
Key Responsibilities
• Design, develop, and maintain end-to-end CI/CD pipelines in GitLab for ML apps/services (tests, quality gates, image building, artifact versioning, multi-env deployments).
• Containerize apps/services with Docker, prepare Helm/K8s manifests.
• Maintain Kubernetes clusters: stability, security, scalability; collaborate on changes and troubleshooting.
• Manage on-prem K8s clusters: upgrades, config, capacity planning, issue support.
• Implement observability (Prometheus, Grafana, ELK/Loki) for metrics, logs, alerts.
• Handle model/data versioning and artifacts with Data Science teams.
• Basic Python/ML integration for production models (API/batch).
Your Background
• 3+ years DevOps/SRE/MLOps experience with production systems.
• Strong Kubernetes: workloads (Deployments, StatefulSets, Jobs/CronJobs), networking (Ingress/Service), storage (PVC/CSI), autoscaling, prod diagnostics.
• Docker expertise (builds, optimization, hardening), Helm/Kustomize.
• GitLab CI/CD pipelines (environments, artifacts, rollouts/rollbacks).
• Python + ML libs (scikit-learn, PyTorch/TensorFlow) for model packaging.
• Observability: Prometheus/Grafana/ELK-Loki, alerts & incident analysis.
• Linux + automation (bash, IaC welcome).
• Fluent English and Polish (both written and spoken).
• Creative, open-minded, and proactive approach to problem-solving.
• Ability to work in a dynamic environment and build solutions from scratch.
Nice to Have
• GPU on K8s (labeling, affinities, monitoring).
• MLOps tools (MLflow, Kubeflow, Airflow).
• IaC (Terraform/Ansible).
• Regulated environments (pharma/finance).
Your Benefits Package
• Daily cooperation and know-how exchange with experts.
• Additional benefits: a prepaid lunch card, private medical care, subsidized sports card, and office fruit provision.
• Internal development initiatives, including soft & leadership skills training programs.
• Recognition Program. • Employee Referral Program. • Support & incentive bonus for completing Ph.D. • Various sports and engagement initiatives.
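The model/data versioning responsibility mentioned in this posting is often implemented as content addressing: derive a version id from the bytes of the dataset itself, so identical data always maps to the same version. A minimal sketch (function name and chunking are illustrative; the posting names no specific tooling):

```python
# Illustrative content-addressed data versioning; nothing here reflects
# Selvita's actual stack.
import hashlib

def dataset_version(chunks: list[bytes]) -> str:
    """Derive a short, stable version id from dataset content."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()[:12]

v1 = dataset_version([b"sample-1", b"sample-2"])
v2 = dataset_version([b"sample-1", b"sample-2"])
v3 = dataset_version([b"sample-1", b"sample-3"])
print(v1 == v2, v1 == v3)  # True False
```

The same idea underlies tools like DVC and MLflow artifact tracking: equal content yields an equal id, so retraining on unchanged data is detectable without comparing the data itself.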
Lead Network Infrastructure Engineer (Cisco)
Capgemini Polska
⚲ Gdańsk, Katowice, Kraków, Lublin, Opole, Poznań, Warszawa, Wrocław
Requirements
- Ansible
- Python
- BGP
- OSPF
- Cisco
- Arista
- Cisco ASA
- Fortigate
- Checkpoint
- Cisco ISE
Our requirements: Minimum 8+ years of experience in Network Engineering, with at least 3 years in a technical lead or managerial role. Bachelor’s degree in IT, Computer Science, or a related field (or equivalent experience). Certifications: CCNP (preferred), CCIE (preferred). Good knowledge of switching and routing, especially BGP, OSPF, vPC, and MLAG in Cisco and Arista environments. Proven experience managing Cisco ASA, Fortigate, and Checkpoint; deep knowledge of Cisco ISE. Expert-level packet capture analysis and performance tuning for WAN accelerators. Practical knowledge of Network-as-Code (Ansible/Python) to streamline repetitive tasks. Fluent English with the ability to simplify technical jargon for non-technical executives. A "see-it-through" mindset for complex customer issues and long-term projects.
About the project: Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you’d like, where you’ll be supported and inspired by a collaborative community of colleagues around the world, and where you’ll be able to reimagine what’s possible. Join us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
Your role: As a Lead Infrastructure Engineer for Poland, you will serve as the primary representative of the Network Services organization for Poland GDC leadership teams. You will oversee critical network operations, guide technical teams, manage major incidents, and ensure seamless service delivery across a complex enterprise environment. This role blends deep technical expertise with strong leadership, communication, and operational excellence.
Responsibilities: Stakeholder & Business Alignment: ensure continuous alignment between network operations and business priorities, and translate business needs into actionable technical strategies.
Escalation Management: serve as the final point of contact for critical (P1/P2) network outages; lead Major Incidents and Potential Major Incidents, ensuring rapid service restoration; coordinate with global network teams and vendors to drive efficient troubleshooting; engage the appropriate technical experts and manage Root Cause Analysis (RCA) to prevent recurrence.
Service Delivery: oversee 24/7 support operations, including rotational schedules and on-call structures; ensure operational readiness, process adherence, and performance across all support teams; monitor SLA compliance and implement continuous improvements to operational efficiency.
We offer: Practical benefits: company car, yearly financial bonus, private medical care with Medicover with additional packages (e.g., dental, senior care, oncology) available on preferential terms, life insurance, and access to the NAIS benefit platform. Award-winning development programs to support your career at every stage. Connected Manager, our leadership development program, has already helped over 300 employees accelerate their growth! Access to over 70 training tracks with certification opportunities (e.g., GenAI, Excel, Business Analysis, Project Management) on our NEXT platform. Dive into a world of knowledge with free access to the Education First languages platform, TED Talks, and Udemy Business materials and trainings. Cutting-edge technology: position yourself at the forefront of IT innovation, working with the latest technologies and platforms. Capgemini partners with top global enterprises, including 145 Fortune 500 companies. Power to build a sustainable, more inclusive future: be the change and engage in our initiatives like IT Girls rEvolution Academy, CharITy Hackathon, Tree Planting, or our Grant Program.
Guidewire Developer
TeamQuest
⚲ Warsaw
19 000 - 25 000 PLN (B2B / UoP)
Requirements
- Gosu
- Azure
- Python
Our client is a modern insurance company that has been revolutionizing the online insurance services market since 2026. As part of the renowned Compensa group, it offers a broad range of insurance services, from vehicle OC/AC, home and apartment insurance, and school accident insurance (NNW) to specialized motorcycle and motorcycle accessories insurance. The company stands out for its flexible customer approach, fast online service, and the ability to tailor insurance to individual needs. Our client's team is continuously growing, seeking talents from various fields who wish to join a dynamic environment where innovation and quality of customer service are paramount. If you are an insurance specialist, agent, customer service employee, or have experience in sales and technical support, our client offers a career path in a company that focuses on development and employee satisfaction.
Guidewire Developer
We offer:
- Attractive salary
- Non-financial benefits
- Hybrid work
Responsibilities:
- Work hands-on with the Guidewire Insurance Suite, a leading platform in the global P&C insurance market
- Use cutting-edge technologies in a cloud-native environment, such as Azure, microservices, and CI/CD
- Collaborate in an agile, cross-functional team of developers, DevOps engineers, and business experts
- Enjoy real autonomy and make a tangible impact on the technical direction of the platform and our business
- Stay current with the best practices in software engineering through continuous learning and innovation
Requirements:
- Minimum 2-3 years of experience in a similar position
- Knowledge of Gosu, the native language of Guidewire
- Curiosity and motivation to learn new technologies and explore modern development patterns
- A collaborative mindset and experience working with both technical and non-technical stakeholders
- Problem-solving skills and the ability to find smart, pragmatic solutions in complex environments
- Proficiency in English: you can confidently read, write, and speak in an international team
Additional advantages:
- Experience working with microservices architecture
- Familiarity with both relational and NoSQL databases
- Experience with Python, GenAI technologies, or Databricks for data and AI-driven applications
- Passion for tech chats over coffee; our best ideas often come during informal discussions
Gerrit Expert (DevOps / Infrastructure)
emagine Polska
⚲ Warsaw
Requirements
- Documentation
- Research & Development (R&D)
- Software as a Service (SaaS)
- Proof of Concept (PoC)
- Operations
- Advisory
- Cloud
- Microsoft Azure
- DevOps
- CI/CD
Gerrit Expert (DevOps / Infrastructure)
Location: European time zone; teams based in Paris
Duration: 3 months (Proof of Concept phase)
Start: ASAP
About the Role
We are looking for a Gerrit Expert (DevOps / Infrastructure) to support a strategic initiative aimed at transforming how Gerrit is hosted and used across multiple R&D sites worldwide. You will contribute to a Proof of Concept (PoC) designed to standardize, scale, and improve the performance of our Gerrit ecosystem, which is currently fragmented across several independent instances. Gerrit has been used for several years across 7 to 10 international R&D sites, with approximately 50 users per site. Current challenges include:
• Lack of standardization and collaboration
• Multiple independent Gerrit instances
• Version inconsistencies
• Limited internal expertise and support
• Data synchronization and performance issues
The goal is to define a robust, scalable, and unified Gerrit architecture, while keeping infrastructure and operations managed internally.
Responsibilities in short:
• Validate and improve the target Gerrit architecture
• Design and implement multi-site replication strategies
• Ensure data consistency across distributed environments
• Define standardization practices (versions, configurations, workflows)
• Support CI/CD integration and artifact management
• Contribute to the setup and execution of the PoC
Key Responsibilities
Architecture & Design
• Review and challenge the current architecture (3rd design iteration)
• Recommend a scalable and resilient Gerrit hosting model
• Advise on on-premise and hybrid/cloud (Azure) approaches
Replication & Data Management
• Design and guide implementation of:
• Master/slave replication
• Multi-site synchronization
• Local replicas for offline resilience
• Ensure data integrity across all sites
DevOps & Integration
• Integrate Gerrit into the CI/CD ecosystem
• Provide best practices for operations, monitoring, and performance
• Support artifact management and automation
Advisory & Knowledge Transfer
• Share best practices from previous Gerrit implementations
• Support internal teams (DevOps & infrastructure)
• Contribute to documentation and long-term sustainability
Must-have
• Strong expertise in Gerrit administration and architecture
• Solid DevOps / infrastructure background
• Experience with:
• Distributed systems and replication
• CI/CD pipelines
• Multi-site environments
Nice-to-have
• Experience with Azure cloud
• Knowledge of artifact management solutions
• Experience in large-scale R&D environments
Soft Skills
• Ability to work with both technical and business stakeholders
• Strong communication and advisory skills
• Autonomous and proactive mindset
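"Ensuring data consistency across distributed environments" is, at its simplest, a matter of comparing branch-head SHAs on each replica against the primary. A hedged sketch of that check (the function, the site names, and the data shapes are all illustrative, not part of any Gerrit API):

```python
# Illustrative replication-drift check: flag branches whose replica heads
# differ from the primary. Site names and SHAs below are made up.

def replication_drift(primary: dict[str, str],
                      replicas: dict[str, dict[str, str]]) -> dict[str, list[str]]:
    """Map each replica site to the branches whose heads differ from the primary."""
    drift = {}
    for site, heads in replicas.items():
        stale = [branch for branch, sha in primary.items()
                 if heads.get(branch) != sha]
        if stale:
            drift[site] = stale
    return drift

primary_heads = {"refs/heads/main": "a1b2", "refs/heads/release": "c3d4"}
sites = {
    "site-a": {"refs/heads/main": "a1b2", "refs/heads/release": "c3d4"},
    "site-b": {"refs/heads/main": "a1b2", "refs/heads/release": "0000"},
}
print(replication_drift(primary_heads, sites))  # {'site-b': ['refs/heads/release']}
```

In a real multi-site Gerrit setup the head SHAs would come from `git ls-remote` or Gerrit's REST API rather than hand-written dicts, and drift would feed alerting rather than a print.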
DevOps Warszawa / Łódź / Gdańsk (m,k,n)
Upvanta
⚲ Warszawa, Łódź, Gdańsk
1 000 - 1 000 PLN (B2B)
Requirements
- DevOps
- Kubernetes
- K8s
- SQL
- WebLogic
- Jenkins
- Bitbucket
About the project: We are looking for a DevOps Engineer with a data background to join a team working on modern data-driven solutions. The project covers developing and maintaining environments based on Databricks and AWS, with particular emphasis on automating and optimizing processes.
We offer:
- Stable cooperation on projects in the banking sector
- Hybrid work model
- Growth opportunities in an enterprise environment
- Work with modern DevOps tools
Requirements:
- A degree in computer science or a related field (nice to have)
- Minimum 3 years of experience as a DevOps Engineer
- Experience with: Kubernetes (K8s), SQL, WebLogic
- Very good knowledge of DevOps processes and practices
- Knowledge of Jenkins and Bitbucket (nice to have)
- Very good English (B2/C1)
Daily tasks:
- Maintaining a WebLogic-based application hosted on the ESP platform
- Keeping WebLogic patches up to date
- Adapting infrastructure to new application versions
- Troubleshooting and quickly resolving issues reported by developers, testers, and Operations
- Maintaining applications running on Linux servers
- Verifying security vulnerabilities
- Ordering new infrastructure components
- Incident support and troubleshooting
- Supporting the ISO enrichment and Sek Batch Release processes
- Supporting the caar-dash application hosted on ESP
- Jenkins troubleshooting
- Supporting UC4 integration
DevOps Team Leader (m/f)
Grupa Pracuj
⚲ Warszawa, Wola
21 000 - 26 000 PLN gross / month
Requirements
- Azure DevOps
- Kubernetes
- ITIL
Our requirements: You have at least 2-3 years of experience managing an IT/DevOps/Infrastructure team. You have experience building a team and running employee reviews and development. You are communicative, able to build relationships, and able to effectively communicate a technical vision to the team and business stakeholders. You can manage priorities between operations (24/7) and platform development. You know the DevOps environment well (CI/CD, containerization, IaC, monitoring). You have experience building a DevOps culture (automation, CI/CD, self-service). You are familiar with cloud environments (Azure preferred) and on-prem. You have experience managing an IT budget or cloud costs. You know English at B2+ level or above.
Nice to have: Experience with Kubernetes. Knowledge of ITIL/ITSM (incident and problem management processes). Experience in a product company with its own infrastructure.
About the project: The DevOps team automates and builds the platform at Grupa Pracuj. We work closely with product teams to streamline the software delivery process. We maintain test and production application environments. All of this happens in an environment rich in interesting tools, both built in-house and external.
Responsibilities: Managing the DevOps team: planning work, setting goals, periodic reviews. Building the team: recruitment, onboarding, developing competencies and career paths. Coordinating the team's platform-maintenance work (incidents, 24/7 on-call, third-line support). Responsibility for the availability, performance, and development of the production platform. Motivating the team and resolving conflicts: looking after atmosphere and engagement. Representing the DevOps team in the organization and communicating with stakeholders. Coordinating the team's cooperation with product and security teams. Managing the team budget and optimizing infrastructure and cloud costs. Working with external vendors and managing contracts/licenses. Planning the development of the platform and tooling.
We offer: Leading a team of DevOps experts. An employment contract. Hybrid work from a modern office in central Warsaw at Rondo Daszyńskiego (at least 1 day a week). With an employment contract, a bonus of half a month's gross salary paid twice a year (the "13th salary"). Private Medicover medical care with a dental package. Monthly points in a cafeteria system, which you can spend on a Multisport card or vouchers for shopping and cultural events.
DevOps Engineer with Python
DataArt
⚲ Wrocław, Łódź, Lublin, Warszawa, Kraków
14 000 - 20 800 PLN net (B2B) | 11 000 - 17 000 PLN gross (UoP)
Requirements
- Python
- AWS
- Linux
- Terraform
Project overview
The project extensively uses AWS cloud, Lambdas, containers, infrastructure as code, SQL and NoSQL storage, and the latest versions of languages and tools. Some teams are more product-oriented and build outstanding student-facing applications, and some teams own shared components and services used by other products.
Position overview
We are looking for a DevOps Engineer to help design, build, and operate scalable cloud-native systems. This role focuses primarily on AWS infrastructure, Infrastructure as Code (Terraform), and system observability and reliability. You will work with AWS Lambda, containerized services, and monitoring tools to ensure stable, secure, and cost-efficient operations. Python is used for automation and scripting tasks, together with core application development. You will collaborate closely with product teams to enable fast feature delivery and with platform teams to strengthen the organization’s cloud foundation.
Responsibilities
• Design, implement, and operate AWS-based infrastructure using Terraform
• Build and maintain scalable serverless and containerized environments (Lambda, ECS, etc.)
• Implement monitoring, logging, and alerting to ensure high system reliability and performance
• Automate operational tasks and workflows using lightweight Python scripts
• Apply best practices for security, availability, and cost optimization in cloud environments
• Support multiple teams by providing stable and reusable cloud components
• Collaborate closely with the client and internal stakeholders in an Agile environment
Requirements
• Strong experience with AWS services (Lambda, RDS, DynamoDB, networking, IAM, etc.)
• Hands-on experience with Infrastructure as Code, preferably Terraform • Solid understanding of Linux-based systems and cloud networking fundamentals • Experience designing and maintaining monitoring and observability solutions (logs, metrics, alerts) • Intermediate Python skills for scripting and automation purposes • Experience working with SQL and NoSQL databases • Familiarity with Agile/SCRUM development processes • Good spoken English for daily communication with client-side team members
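The "lightweight Python scripts" this posting mentions are typically small governance or cost helpers. As one hedged illustration (the posting names no specific task), here is a tag-compliance check over a plain dict shaped like a typical AWS `describe_*` response; the required tag names and the sample data are invented for the example:

```python
# Illustrative only: flags resources missing required cost-allocation tags.
# The tag set and sample data are made up; in practice the input would come
# from a boto3 describe_* call.

REQUIRED_TAGS = {"team", "cost-center"}

def untagged_resources(resources: list[dict]) -> list[str]:
    """Return IDs of resources missing any of the required tags."""
    flagged = []
    for res in resources:
        tags = {t["Key"] for t in res.get("Tags", [])}
        if not REQUIRED_TAGS <= tags:
            flagged.append(res["ResourceId"])
    return flagged

sample = [
    {"ResourceId": "i-0abc", "Tags": [{"Key": "team", "Value": "data"},
                                      {"Key": "cost-center", "Value": "42"}]},
    {"ResourceId": "i-0def", "Tags": [{"Key": "team", "Value": "web"}]},
]
print(untagged_resources(sample))  # ['i-0def']
```

A script like this would usually run on a schedule (e.g., as a Lambda) and feed its output into alerting or an auto-tagging step.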
DevOps Engineer with Camunda (f/m/x)
Sii
⚲ Warszawa
Requirements
- CI/CD
- Camunda
- Helm
- Jenkins
- Kubernetes
- Ansible
- Docker
- Argo CD
- Amazon AWS
- Microsoft Azure
We are currently looking for a DevOps Engineer who will join a project delivered for one of our clients. In this role, you will be part of a collaborative team working on scalable, workflow-driven solutions based on the Camunda platform. The project focuses on building and maintaining modern, cloud-based applications with a strong emphasis on automation, reliability, and continuous delivery. Your tasks • Design, build, and maintain CI/CD pipelines • Automate deployments and manage application releases • Manage and support cloud-based infrastructure • Monitor applications to ensure performance, stability, and availability • Collaborate with development teams to deploy and maintain applications • Troubleshoot issues, perform root cause analysis, and optimize systems • Continuously improve DevOps processes and tools Requirements • Minimum 3 years of professional experience in a DevOps role • Hands-on experience with the Camunda platform • Knowledge of containerization and orchestration technologies • Experience with CI/CD tools (e.g., Jenkins) • Familiarity with cloud platforms such as Azure or AWS • Previous work with Ansible and Ansible Tower • Willingness to work from the office at least 1 day per week • Fluent Polish required • Residing in Poland required
DevOps Engineer (Observability)
Link Group
⚲ Warszawa
21 840 - 24 360 PLN (B2B)
Requirements
- Prometheus
- Grafana
- Thanos
- Mimir
- ELK Stack
- Loki
- Terraform
- Terragrunt
- Kubernetes
- Python (nice to have)
- Go (nice to have)
- CI/CD (nice to have)
- GitHub Actions (nice to have)
- Automated workflows (nice to have)
- Puppet (nice to have)
- SLIs (nice to have)
- SLOs (nice to have)
About the project: Join a high-performing, international team of six DevOps experts. This is not a "maintenance-only" role. You will have a seat at the table in designing, building, and scaling our next-generation observability and logging solutions from the ground up. We believe in "Attitude First." If you are an ambitious engineer who thrives on collaboration, knowledge sharing, and solving complex distributed systems challenges, we want to grow with you.
Requirements:
- Observability Expert: Solid hands-on experience with Prometheus, Grafana, and scaling tools like Thanos or Mimir.
- Logging Architect: Proven experience managing enterprise-grade logging platforms (ELK stack or Loki).
- IaC Ninja: Strong proficiency in Terraform/Terragrunt to manage infrastructure.
- Cloud Native: Deep understanding of Kubernetes and the complexities of metrics/logs/traces in distributed systems.
- Language: Full proficiency in English for seamless global collaboration.
Stand Out From The Crowd (Nice to Have)
- Coding: Ability to automate and integrate using Python or Go.
- CI/CD: Exposure to GitHub Actions and automated workflows.
- Configuration Management: Experience with Puppet.
- SRE Mindset: Understanding of Service Level Indicators (SLIs), Objectives (SLOs), and Error Budgets.
Day-to-day tasks:
- Architect & Build: Design and implement end-to-end observability solutions, including metrics, logging, tracing, and advanced alerting.
- Platform Excellence: Operate and optimize high-scale monitoring platforms (Prometheus, Mimir, Grafana) and ELK stack logging infrastructure.
- Infrastructure as Code: Define and maintain all observability systems using Terraform and Terragrunt.
- Reliability Engineering: Ensure the scalability and performance of our systems while supporting incident detection and root cause analysis (RCA).
- Collaborate: Work across domains with a team that values mentoring, transparency, and collective problem-solving.
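The "SRE Mindset" item above mentions SLOs and error budgets. As a minimal illustration (the numbers and function names are ours, not the employer's), the error budget implied by an availability SLO over a 30-day window can be computed as:

```python
def error_budget_minutes(slo: float, window_minutes: int = 30 * 24 * 60) -> float:
    """Minutes of allowed downtime in the window for a given availability SLO."""
    return (1.0 - slo) * window_minutes

def budget_remaining(slo: float, downtime_minutes: float,
                     window_minutes: int = 30 * 24 * 60) -> float:
    """Fraction of the error budget still unspent (negative = budget blown)."""
    budget = error_budget_minutes(slo, window_minutes)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over 30 days allows 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))   # 43.2
print(round(budget_remaining(0.999, 10), 3))   # 0.769 of the budget left
```

In practice the burn rate of this budget, not the raw fraction, usually drives alerting decisions.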
DevOps Engineer (AWS Migration)
NESS SOLUTION sp. z o.o.
⚲ Warszawa
17 000–19 000 PLN gross / month
Requirements
- Docker
- Kubernetes
- AWS
- Terraform
- Ansible
- Jenkins
- Python / Bash / Groovy / Go
- Google Cloud Platform
- Azure
- Prometheus
- Grafana
Our requirements: Minimum 3 years of experience as a DevOps or Cloud Engineer. Fluency with the AWS platform (AWS certification will be a significant asset). Advanced Terraform (IaC) skills. Experience with container technologies (Docker, Kubernetes). Knowledge of Linux/Unix systems and scripting languages (Python, Bash, Groovy, or Go). Understanding of CI/CD processes and automation tools (e.g., Jenkins, Ansible). Very good command of English.
Nice to have: Experience working in regulated environments (PCI standards). Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK). Basic familiarity with applications written in Java (for configuring CI/CD pipelines).
About the project: 🚀 Location: Warszawa, Wola, close to a metro station. 🛠 Technologies: AWS, Terraform, Kubernetes, Java ecosystem. Project: building new AWS infrastructure from scratch and migrating two key applications. 🏝 On-call: none.
About the project
You will join a team with a clear, ambitious goal: we are building a new AWS environment from scratch for two key applications and migrating them from their current infrastructure. It is a project with a lot of autonomy, where your knowledge of AWS and Terraform will be the foundation of success.
Responsibilities:
• Infrastructure as Code: designing, deploying, and maintaining infrastructure based on Terraform.
• Migration and deployments: owning the process of moving services to the new AWS environment and ensuring their high availability.
• CI/CD automation: developing and maintaining deployment pipelines supporting end-to-end service delivery.
• Technical collaboration: close cooperation with Dev and Ops teams on application scalability.
• Monitoring and security: overseeing system performance and implementing security measures (in line with PCI standards).
We offer: Medical, dental, and vision care. Employee assistance programs. Parent- and family-friendly policies.
Paid leave, including holidays in your region. Individual and team recognition. Work with modern CI/CD platforms and automation tools. Support for professional development and certification. Exposure to the latest deployment strategies and automation techniques.
DevOps Engineer (AWS Migration Project)
Ness Solution
⚲ Warszawa
17 000 - 19 000 PLN gross (UoP)
Requirements
- CI/CD
- Terraform
- Linux
- Kubernetes
- AWS
- Jenkins
- Ansible
- Docker
- GCP
- Azure
DevOps Engineer (AWS Migration Project) ☁️💻
🏢 Work mode: hybrid, 3 days per week in the office; Warszawa, Wola, close to a metro station. Terms: 17 000-19 000 PLN gross, UoP (employment contract).
🛠 Technologies: AWS, Terraform, Kubernetes, Java ecosystem
🏝 On-call: none
About the project
We are looking for an experienced cloud engineer to join the team of our client, a global leader in fintech / electronic payments, to look after the stability, security, and performance of its cloud environments. You will join a team with a clear, ambitious goal: we are building a new AWS environment from scratch for two key applications and migrating them from their current infrastructure (also AWS). It is a project with a lot of autonomy, where your knowledge of AWS and Terraform will be the foundation of success.
Your responsibilities
• Infrastructure as Code: designing, deploying, and maintaining infrastructure based on Terraform.
• Migration and deployments: owning the process of moving services to the new AWS environment and ensuring their high availability.
• CI/CD automation: developing and maintaining deployment pipelines supporting end-to-end service delivery.
• Technical collaboration: close cooperation with Dev and Ops teams on application scalability.
• Monitoring and security: overseeing system performance and implementing security measures (in line with PCI standards).
Who we are looking for
• Minimum 3 years of experience as a DevOps or Cloud Engineer.
• Fluency with the AWS platform (AWS certification will be a significant asset).
• Advanced Terraform (IaC) skills.
• Experience with container technologies (Docker, Kubernetes).
• Knowledge of Linux/Unix systems and scripting languages (Python, Bash, Groovy, or Go).
• Understanding of CI/CD processes and automation tools (e.g., Jenkins, Ansible).
Nice to have
• Experience working in regulated environments (PCI standards).
• Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK).
• Basic familiarity with applications written in Java (for configuring CI/CD pipelines).
Recruitment process (we respect your time):
• Screening (15 min): a short introductory call with the Manager.
• Technical deep dive (1h): we check your skills in practice (technical interview), remote.
• Final round (1h): a meeting with your future managers, remote.
What we offer:
• Long-term cooperation with our long-standing client from the payments industry under an employment contract.
• A benefits package: leave, insurance, medical care, Multisport, training, and more.
We would be happy to talk and tell you more about the project. Apply now!
DevOps Engineer
ITEAMLY SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Kraków
13 500 - 18 000 PLN net (+ VAT)
Requirements
- AWS
- Docker
- ECS/EKS
- Terraform
- Ansible
- Jenkins
Requirements:
- Hands-on experience with AWS and container orchestration (ECS/EKS, Docker).
- Proficiency with infrastructure automation tools (Terraform, Ansible).
- Solid experience with CI/CD tools, ideally Jenkins.
- Strong understanding of monitoring and observability practices.
- Excellent communication and teamwork skills.
- Analytical mindset with a problem-solving approach and initiative.
About the company:
- As Venture Building, we create profitable technology companies from scratch. We collaborate with partners who share similar aspirations and are open to leveraging the potential of their networks, cultivated over years. We regularly launch new projects founded on long-term, amicable relationships with our associates.
Responsibilities:
- Design, implement, and maintain cloud infrastructure (primarily AWS).
- Develop and manage CI/CD pipelines to streamline deployments.
- Automate infrastructure provisioning and configuration using IaC tools.
- Monitor systems to ensure stability, performance, and security.
- Collaborate with developers and QA to debug, optimize, and deploy applications.
- Proactively identify improvements and optimize costs and system efficiency.
We offer:
- Opportunity to work in a dynamic, technology-driven team.
- Exposure to modern cloud technologies and scalable infrastructure projects.
- Competitive employment package and benefits.
- 26 days of annual leave.
- Fully remote work.
DevOps Engineer
ITEAMLY SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Kraków
13 500–18 000 PLN net (+ VAT) / month
Requirements
- AWS
- Docker
- ECS/EKS
- Terraform
- Ansible
- Jenkins
Our requirements: Hands-on experience with AWS and container orchestration (ECS/EKS, Docker). Proficiency with infrastructure automation tools (Terraform, Ansible). Solid experience with CI/CD tools, ideally Jenkins. Strong understanding of monitoring and observability practices. Excellent communication and teamwork skills. Analytical mindset with a problem-solving approach and initiative.
About the project: We are seeking a skilled DevOps Engineer to support the development and maintenance of modern cloud infrastructure. The ideal candidate has practical experience with AWS, infrastructure automation, and CI/CD pipelines, and excels at collaborating with development and QA teams.
Responsibilities: Design, implement, and maintain cloud infrastructure (primarily AWS). Develop and manage CI/CD pipelines to streamline deployments. Automate infrastructure provisioning and configuration using IaC tools. Monitor systems to ensure stability, performance, and security. Collaborate with developers and QA to debug, optimize, and deploy applications. Proactively identify improvements and optimize costs and system efficiency.
We offer: Opportunity to work in a dynamic, technology-driven team. Exposure to modern cloud technologies and scalable infrastructure projects. Competitive employment package and benefits. 26 days of annual leave. Fully remote work.
DevOps Engineer
iTeamly
⚲ Kraków, Wrocław, Łódź, Gdańsk, Warszawa
13 500 - 18 000 PLN net (B2B)
Requirements
- AWS
- Terraform
- Ansible
- Docker
We are seeking a skilled DevOps Engineer to support the development and maintenance of modern cloud infrastructure. The ideal candidate has practical experience with AWS, infrastructure automation, and CI/CD pipelines, and excels at collaborating with development and QA teams.
✅ Your responsibilities:
• Design, implement, and maintain cloud infrastructure (primarily AWS).
• Develop and manage CI/CD pipelines to streamline deployments.
• Automate infrastructure provisioning and configuration using IaC tools.
• Monitor systems to ensure stability, performance, and security.
• Collaborate with developers and QA to debug, optimize, and deploy applications.
• Proactively identify improvements and optimize costs and system efficiency.
🧠 Our requirements:
• Hands-on experience with AWS and container orchestration (ECS/EKS, Docker).
• Proficiency with infrastructure automation tools (Terraform, Ansible).
• Solid experience with CI/CD tools, ideally Jenkins.
• Strong understanding of monitoring and observability practices.
• Excellent communication and teamwork skills.
• Analytical mindset with a problem-solving approach and initiative.
🌟 What we offer:
• Opportunity to work in a dynamic, technology-driven team.
• Exposure to modern cloud technologies and scalable infrastructure projects.
• Competitive employment package and benefits.
• Fully remote work
DevOps / Platform Engineer (AWS)
Upvanta sp. z o.o.
⚲ Poznań, Warszawa, Wrocław, Gdańsk, Kraków
1 000 - 1 200 PLN/day net (B2B)
Requirements
- AWS
- DevOps
- Jenkins
- Bash
- Terraform
- Kubernetes
About the project
You will join the team responsible for building and developing an internal developer platform whose goal is to simplify how development teams work with cloud infrastructure. The platform enables self-service provisioning of resources, reducing the need to involve DevOps teams in day-to-day operations.
Responsibilities
• Designing and developing a self-service platform for development teams
• Automating infrastructure provisioning based on Jenkins pipelines (Groovy) and Terraform
• Creating and maintaining Helm charts for deploying applications to Kubernetes (EKS)
• Collaborating on the design of infrastructure solutions in the AWS environment
• Working with components such as Nginx, CloudFront, S3, Lambda, Kafka, Cloudflare
• Optimizing and standardizing infrastructure management processes
Requirements
• Experience as a DevOps / Platform Engineer
• Very good knowledge of AWS
• Practical experience with Kubernetes (EKS)
• Knowledge of Jenkins (building pipelines in Groovy)
• Experience with Terraform (IaC)
• Scripting skills in Bash / Groovy
• Understanding of system architecture and cloud infrastructure
Nice to have
• Experience building developer platforms (Internal Developer Platform)
• Familiarity with the self-service / platform engineering approach
• Experience with Helm and managing applications in Kubernetes
• Knowledge of CDN and edge tools (Cloudflare, CloudFront)
We offer
• Long-term cooperation (a project with the possibility of extension)
• Work with a modern technology stack
• Significant influence on the platform's architecture and development
• Work in an experienced technical team
DevOps / Platform Engineer
Link Group
⚲ Remote
21 000 - 24 360 PLN (B2B)
Requirements
- AWS
- Kubernetes
- EKS
- Jenkins
- CI/CD
- Automation pipelines
- Terraform
- Groovy
- Bash
- Nginx (nice to have)
- CloudFront (nice to have)
- S3 (nice to have)
- Lambda (nice to have)
- Kafka (nice to have)
- Cloudflare (nice to have)
- Problem-Solving (nice to have)
About the project: We are looking for a skilled DevOps / Platform Engineer to design and build an internal developer platform that streamlines how engineering teams interact with cloud infrastructure. The primary goal of this initiative is to reduce manual effort for operations teams by enabling self-service infrastructure provisioning and automation.
Requirements:
- Strong experience with AWS as a primary cloud platform
- Solid knowledge of Kubernetes (EKS) and containerized workloads
- Hands-on experience with Jenkins for CI/CD and automation pipelines
- Proficiency in Terraform for infrastructure as code
- Scripting skills in Groovy and Bash
- Good understanding of modern infrastructure components such as reverse proxies, CDNs, object storage, serverless computing, and messaging/streaming platforms
Nice to Have:
- Experience with tools and technologies such as Nginx, CloudFront, S3, Lambda, Kafka, or Cloudflare
- Experience building internal developer platforms or platform engineering practices
- Strong problem-solving skills and ability to work across distributed systems
Day-to-day tasks:
- Design, build, and maintain a self-service platform that allows developers to provision and manage cloud resources independently
- Automate infrastructure workflows using Jenkins pipelines written in Groovy, orchestrating Terraform code
- Develop and manage Helm charts for deploying applications on Kubernetes clusters (EKS)
- Collaborate across teams to design scalable, secure, and efficient infrastructure solutions
- Work across the full infrastructure stack, including web servers, CDN, storage, serverless, and streaming systems
- Continuously improve platform reliability, usability, and automation capabilities
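As a toy illustration of the self-service idea described above (entirely our sketch, not the client's actual platform; `render_tfvars` and the size catalog are hypothetical), a developer's provisioning request can be validated and rendered into a Terraform variables file that a pipeline would then apply:

```python
import json

ALLOWED_SIZES = {"small", "medium", "large"}  # hypothetical instance catalog

def render_tfvars(request: dict) -> str:
    """Validate a developer's provisioning request and render *.tfvars.json content.

    Terraform accepts variable values from JSON files ending in .tfvars.json,
    so a pipeline can write this string to disk and run `terraform apply`.
    """
    name = request.get("name", "")
    size = request.get("size", "")
    if not name.isidentifier():
        raise ValueError(f"invalid resource name: {name!r}")
    if size not in ALLOWED_SIZES:
        raise ValueError(f"size must be one of {sorted(ALLOWED_SIZES)}")
    return json.dumps({"service_name": name, "instance_size": size}, indent=2)

print(render_tfvars({"name": "billing_api", "size": "small"}))
```

A real platform would enforce a richer schema (quotas, ownership tags, environments) before any infrastructure change is triggered.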
DevOps (Camunda)
BlueSoft
⚲ Warszawa
Requirements
- Camunda
- Docker
- Kubernetes
Job description: For a client from the financial sector, we are looking for a DevOps Engineer (Mid) to support the team responsible for developing and maintaining modern systems based on a microservices architecture. Camunda is a key element of the environment, so experience with this tool is essential.
Requirements:
• Minimum 3 years of experience in DevOps
• Minimum 2 years of experience working with Camunda (version 8 preferred)
• Knowledge of technologies and tools: Docker, Kubernetes, Helm, ArgoCD
• Experience with CI/CD (e.g., Jenkins)
• English at minimum B2 level
We offer:
👨💻 Collaboration in a team of interdisciplinary experts and technology enthusiasts, inspired by working in a flat hierarchy;
📈 Stable, full-time cooperation in a remarkable company that has been growing dynamically for 20 years;
🚀 A chance to grow in a variety of IT projects delivered for well-known, recognizable clients who are leaders in their industries;
🥇 The opportunity to work with the latest technologies based on the highest development standards;
⌚️ Flexible hours and a hybrid or fully remote work model (arrangements vary by team; we do not impose any option);
☁ A range of initiatives supporting knowledge sharing and growth, such as internal training, topic guilds, and cloud training and certification opportunities (technology partnerships across four clouds);
📣 Online English or German lessons and team English classes;
🏋️♂️ The option to join Multisport and medical insurance;
⛵️ Sea adventures: sail with us on the Caribbean, Mediterranean, or North Sea (company cruises are co-financed by the company).
Data Engineering Tech Lead
Lingaro
⚲ Warszawa
Requirements
- Azure
- Python
- SQL
Growth through diversity, equity, and inclusion. As an ethical business, we do what is right — including ensuring equal opportunities and fostering a safe, respectful workplace for each of us. We believe diversity fuels both personal and business growth. We're committed to building an inclusive community where all our people thrive regardless of their backgrounds, identities, or other personal characteristics.
Tasks:
• Provide leadership and guidance to the data engineering team, including mentoring, coaching, and fostering a collaborative work environment. Set clear goals, assign tasks, and manage resources to ensure successful project delivery. Work closely with developers to support them and improve data engineering processes.
• Support team members with troubleshooting and resolving complex technical issues and challenges.
• Provide technical expertise and direction in data engineering, guiding the team in selecting appropriate tools, technologies, and methodologies. Stay updated with the latest advancements in data engineering and ensure the team follows best practices and industry standards.
• Collaborate with stakeholders to understand project requirements, define scope, and create project plans.
• Support project managers to ensure that projects are executed effectively, meeting timelines, budgets, and quality standards. Monitor progress, identify risks, and implement mitigation strategies.
• Act as a trusted advisor for the customer.
• Oversee the design and architecture of data solutions, collaborating with data architects and other stakeholders. Ensure data solutions are scalable, efficient, and aligned with business requirements. Provide guidance in areas such as data modeling, database design, and data integration.
• Align coding standards and conduct code reviews to ensure the proper level of code quality.
• Identify and introduce quality assurance processes for data pipelines and workflows.
• Optimize data processing and storage for performance, efficiency, and cost savings.
• Evaluate and implement new technologies to improve data engineering processes in areas such as CI/CD, quality assurance, and coding standards.
• Act as the main point of contact for other teams/contributors engaged in the project.
• Maintain the project's technical documentation, control its validity, and review it regularly.
• Ensure compliance with security standards and regulations.
What We're Looking For:
• Minimum of 5 years of experience in data engineering or a related field.
• Strong technical skills in data engineering, including proficiency in programming languages such as Python, SQL, and Scala.
• Familiarity with the Azure cloud platform and services, and experience implementing data solutions in a cloud environment.
• Expertise in working with various data tools and technologies, such as ETL frameworks, data pipelines, and data warehousing solutions.
• Proven experience in leading and managing a team of data engineers, providing guidance, mentorship, and technical support.
• In-depth knowledge of data management principles and best practices, including data governance, data quality, and data integration.
• Strong project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines.
• Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues.
• Knowledge of data security and privacy regulations, and the ability to ensure compliance within data engineering projects.
• Excellent communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams, stakeholders, and senior management.
• Continuous learning mindset, staying updated with the latest advancements and trends in data engineering and related technologies.
• A bachelor's or master's degree in Computer Science, Information Systems, or a related field is typically required. Additional certifications in data integration tools or platforms are advantageous.
Missing one or two of these qualifications? We still want to hear from you! If you bring a positive mindset, we'll provide an environment where you feel valued and empowered to learn and grow.
Offer:
• Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites.
• "Office as an option" model. You can choose to work remotely or in the office.
• Workation. Enjoy working from inspiring locations in line with our workation policy.
• Great Place to Work® certified employer.
• Flexibility regarding working hours and your preferred form of contract.
• Comprehensive online onboarding program with a "Buddy" from day 1.
• Cooperation with top-tier engineers and experts.
• Unlimited access to the Udemy learning platform from day 1.
• Certificate training programs. Lingarians earn 500+ technology certificates yearly.
• Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly.
• Grow as we grow as a company. 76% of our managers are internal promotions.
• A diverse, inclusive, and values-driven community.
• Autonomy to choose the way you work. We trust your ideas.
• Create our community together. Refer your friends to receive bonuses.
• Activities to support your well-being and health.
• Plenty of opportunities to donate to charities and support the environment.
• Modern office equipment. Purchased for you or available to borrow, depending on your location.
Cybersecurity Architect
OChK
⚲ Warszawa
20 000 - 24 000 PLN gross (UoP)
Requirements
- Git
- Log Management
- Secret Management
- Bash
- Hashicorp Vault
- SIEM
- Linux
- FastAPI
- PAM
- Python
OChK is a leading cloud services provider in Poland, with proven experience in advanced data analytics, artificial intelligence, and cybersecurity solutions. It consistently develops its rich portfolio of value-added services both on the proprietary OChK Platform and on the cloud platforms of its strategic partners, Google and Microsoft. OChK experts build scalable and secure environments in line with best practices and market standards, supporting clients' business goals at every stage of cloud adoption. They provide end-to-end services: from strategy and migration, through IT environment management and maintenance, building applications and data/AI solutions, to optimization, automation, security, and support for regulatory compliance of deployed solutions. Nearly 400 organizations from almost every sector use OChK services, including public institutions, private companies, corporations, supervised entities, and startups.
We are currently looking for a Cybersecurity Architect to join our Cybersecurity team.
Workplace: Warszawa / remote. Contract type: UoP / B2B. Salary: 20 000-24 000.
Your responsibilities:
• participating in projects delivered on the OChK Platform, OChK's proprietary public cloud,
• creating and developing the security architecture, including defining standards, policies, and recommended solutions in line with business and regulatory requirements,
• running workshops and consultations on cloud security for technical and business teams,
• preparing architectural and analytical documentation and security recommendations,
• initiating and designing changes to IT system architecture to minimize risk and raise the level of security,
• reviewing new products, services, and changes to existing solutions for compliance with security requirements and market best practices,
• overseeing the implementation of security requirements when deploying new systems and modifying existing solutions,
• cooperating with infrastructure, development, compliance, and SOC teams on identifying and addressing security risks.
Our requirements:
• experience designing and managing cryptographic mechanisms in hybrid environments,
• practical experience securing applications, data, and infrastructure in cloud and on-premise models,
• knowledge of SIEM, Log Management, PAM, and Secret Management tools (e.g., HashiCorp Vault),
• highly developed analytical skills and the ability to identify and assess security risks,
• knowledge of the Linux operating system,
• knowledge of application containerization,
• Python 3 programming skills and knowledge of FastAPI,
• ability to write Bash scripts, etc.,
• knowledge of the Git version control system,
• experience building DevOps/DevSecOps processes,
• knowledge of the REST, OAuth2, OpenID Connect, AAA, OpenAPI/Swagger, and JWT standards/protocols,
• English at minimum B2 level, sufficient for working with technical documentation.
Nice to have:
• experience working with clients from regulated sectors, particularly in the context of supervisory requirements for processing data in the cloud,
• knowledge of Go, Terraform, and Key Management Service (KMS) class systems,
• industry certifications confirming competence in security and cloud solution architecture.
At OChK:
• we value proactivity and personal initiative, so we support autonomy in decision-making,
• we build our organizational culture on values such as professionalism, shared responsibility, and mutual respect,
• we work task-based in a hybrid or remote model,
• we put great emphasis on effective onboarding, during which, in a relaxed atmosphere and with the full support of your CloudBuddy, you get to know the team, the company, and your responsibilities,
• we invest in your development by funding training and certifications,
• from day one you get access to the Google and Microsoft learning platforms,
• we work in an agile environment, using productivity tools such as Google Workspace, Slack, GitHub, and Jira,
• we offer private medical care,
• we let you join group insurance on preferential terms,
• we cover the cost of a Multisport card,
• we organize and co-finance English lessons,
• we like to get together during various company and grassroots initiatives that help us get to know each other better and keep a good working atmosphere.
Customer Success Engineer
Awareson Sp. z o.o.
⚲ Warszawa
160 - 210 PLN/h net (B2B)
Requirements
- Azure
- Site Reliability Engineering (SRE)
- Grafana
- Prometheus
- Observability
Customer Success Engineer (Observability / Platform)
We are looking for a Customer Success Engineer to support the adoption and effective use of an internal observability and event management platform. This role requires a solid technical background and focuses on working closely with internal teams to design, implement, and optimize monitoring and alerting strategies across distributed systems.
Responsibilities:
• Support onboarding of applications and services into the observability platform (logs, metrics, alerting, event management)
• Help define and implement monitoring strategies, including dashboards, alerting rules, and SLOs / SLIs
• Analyze system behavior using telemetry data (logs, metrics, traces) to support troubleshooting and improve visibility
• Translate business and operational requirements into technical observability solutions
• Work with engineering teams to ensure proper instrumentation and integration with monitoring tools
• Handle incident analysis and troubleshooting (L1/L2 support) using observability data
• Contribute to improving platform processes, documentation, and onboarding standards
• Support adoption of best practices in observability, reliability, and event management
Requirements:
• Experience in DevOps, SRE, Platform Engineering, or Technical Support (L2/L3)
• Solid understanding of observability concepts: logs, metrics, traces, alerting, SLO/SLI
• Hands-on experience with tools such as Grafana, Datadog, Prometheus (or similar)
• Understanding of distributed systems and application architectures
• Ability to troubleshoot issues using monitoring and telemetry data
• Experience working with cloud environments (AWS, Azure, or GCP)
• Experience interacting with stakeholders and translating requirements into technical solutions
• Strong problem-solving skills and willingness to work on operational tasks
Nice to have:
• Experience with event/incident management platforms
• Exposure to CI/CD pipelines or infrastructure automation
• Familiarity with instrumentation practices (e.g. OpenTelemetry)
• Experience in internal platform or developer tooling environment
Cloud Engineer
Caspian One
⚲ Cracow
1 200 - 1 600 PLN/day net (B2B)
Requirements
- DevOps
- PostgreSQL
- GCP
- Java
Senior Cloud DevOps Engineer – Krakow
We are looking for an experienced Senior Cloud DevOps Engineer to join a global technology team working on a large-scale, business-critical platform used across multiple regions. The environment is modern, fast-moving, and genuinely engineering-led, with a strong focus on automation, cloud adoption, and continuous delivery. This role suits someone who enjoys solving complex infrastructure challenges, improving DevOps practices, and working closely with development and SRE teams to keep a high-availability platform running smoothly.
What you will be doing
• Leading and evolving DevOps, infrastructure, and tooling capabilities across cloud and on-premise environments
• Driving automation and Infrastructure as Code to reduce manual effort and improve consistency
• Enhancing CI/CD pipelines and deployment processes for a high-volume engineering organisation delivering hundreds of changes each month
• Supporting and optimising production and non-production systems to ensure performance, reliability, and scalability
• Working closely with developers, SRE, DBA, and platform teams to troubleshoot issues and strengthen engineering standards
• Contributing to architectural discussions, engineering improvements, and broader platform modernisation initiatives
Key tech environment
• GCP, including CloudSQL and GKE
• Kubernetes and container platforms
• Terraform and Infrastructure as Code tooling
• CI/CD using Jenkins, Maven, and Ansible
• Java-based microservices with Java 11 and Spring Boot
• Python and shell scripting
• Linux fundamentals
Azure DevOps Engineer (Observability)
Lingaro
⚲ Warszawa
Requirements
- Azure DevOps
- Grafana
- Prometheus
- Kubernetes
- Docker
- Terraform
- CI/CD
Growth through diversity, equity, and inclusion. As an ethical business, we do what is right, including ensuring equal opportunities and fostering a safe, respectful workplace for each of us. We believe diversity fuels both personal and business growth. We're committed to building an inclusive community where all our people thrive regardless of their backgrounds, identities, or other personal characteristics.
Responsibilities:
• Maintain the observability platform and introduce observability on new projects
• Monitor infrastructure (create Grafana dashboards and alerts, collect Prometheus metrics)
• Implement automated management features such as performance monitoring, diagnostics, and failover
• Configure implemented solutions in accordance with system security processes
• Manage CI (continuous integration) systems and pipelines
• Design and implement infrastructure
Requirements:
• 5+ years of DevOps experience
• Hands-on experience with Azure cloud services (PaaS & IaaS)
• Strong experience with Grafana and Prometheus
• Experience building CI/CD pipelines (GitHub Actions, Jenkins)
• Kubernetes and Docker experience
• Hands-on experience with Terraform
• Administration of Linux & Windows VMs
• Strong understanding of networking principles
• Scripting skills: Python, Bash, PowerShell
• Familiarity with static code analysis tools (SonarQube, Snyk)
• Managing and securing Azure managed databases (MS SQL, PostgreSQL)
• Developing and maintaining patch management processes
• Databricks workspace administration – a plus
Missing one or two of these qualifications? We still want to hear from you! If you bring a positive mindset, we'll provide an environment where you feel valued and empowered to learn and grow.
We offer:
• Stable employment. On the market since 2008, with 1,500+ talents currently on board across 7 global sites.
• "Office as an option" model. You can choose to work remotely or in the office.
• Workation. Enjoy working from inspiring locations in line with our workation policy.
• Great Place to Work® certified employer.
• Flexibility regarding working hours and your preferred form of contract.
• Comprehensive online onboarding program with a "Buddy" from day 1.
• Cooperation with top-tier engineers and experts.
• Unlimited access to the Udemy learning platform from day 1.
• Certificate training programs. Lingarians earn 500+ technology certificates yearly.
• Upskilling support. Capability development programs, Competency Centers, knowledge-sharing sessions, community webinars, and 110+ training opportunities yearly.
• Grow as we grow as a company. 76% of our managers are internal promotions.
• A diverse, inclusive, and values-driven community.
• Autonomy to choose the way you work. We trust your ideas.
• Create our community together. Refer your friends to receive bonuses.
• Activities to support your well-being and health.
• Plenty of opportunities to donate to charities and support the environment.
• Modern office equipment. Purchased for you or available to borrow, depending on your location.
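A core piece of the Grafana/Prometheus work this posting lists is writing alerting rules, which are essentially threshold predicates sustained over a duration (what Prometheus expresses with a rule's `for:` clause). A simplified illustration of that logic, with made-up metric values:

```python
def alert_firing(samples: list[float], threshold: float, for_points: int) -> bool:
    """Fire only if the last `for_points` consecutive samples breach the
    threshold, mimicking a Prometheus rule's `for:` duration to suppress
    short-lived spikes (flapping)."""
    if len(samples) < for_points:
        return False
    return all(s > threshold for s in samples[-for_points:])

cpu = [0.42, 0.91, 0.95, 0.97]  # utilization samples, newest last
assert alert_firing(cpu, threshold=0.90, for_points=3)      # sustained breach
assert not alert_firing(cpu, threshold=0.90, for_points=4)  # 0.42 breaks the run
```

In a real rule the samples come from a PromQL query and the duration is wall-clock time, but this hold-before-firing behavior is the part that keeps alert noise down.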
Ansible Engineer / DevOps Automation Engineer
Cyclad
⚲ Warszawa, Kraków
Requirements
- Ansible
- DevOps
- Cloud
- Python
- CI/CD
At Cyclad, we work with top international IT companies to boost their potential in delivering outstanding, cutting-edge technologies that shape the world of the future. Currently, for our client, we are looking for a skilled Ansible Engineer / DevOps Automation Engineer responsible for designing, implementing, and maintaining automated infrastructure solutions using Ansible and complementary DevOps tools. In this role, you will manage Ansible environments (AWX/AAP), develop and review automation playbooks, support production systems, optimize CI/CD processes, and integrate automation with cloud platforms. You will collaborate with cross-functional teams to ensure reliable, scalable, and secure deployments.
Project information:
• Experience level: 4+ years
• Office locations: Cracow or Warsaw (presence required 8 days per month)
• Work model: Hybrid (2 days in the office, 3 days remote)
• Type of employment: B2B contract
Project scope:
• Design, implement, and maintain infrastructure automation using Ansible
• Manage and maintain Ansible environments (AWX/AAP), including High Availability setups
• Develop and review Ansible playbooks, roles, and modules
• Support production systems and monitor performance
• Troubleshoot complex technical issues
• Optimize CI/CD pipelines
• Integrate automation with cloud platforms (AWS, Azure, or GCP)
Requirements:
• Hands-on experience with Ansible administration (AWX or AAP)
• Strong expertise in Ansible (playbooks, roles, modules, Ansible Galaxy)
• Solid understanding of YAML
• Proficiency in Python
• Experience with Unix shell scripting / Bash
• Good knowledge of Linux and Windows systems
• Experience with CI/CD tools (e.g., Jenkins, GitHub, Nexus)
• Experience working with cloud platforms (AWS, Azure, or GCP)
• Hands-on experience with Kubernetes (k8s)
• Experience with PostgreSQL
Nice to have:
• Experience integrating Ansible with HashiCorp Vault or Ansible Vault
• Knowledge of configuration drift management and security compliance tools
• Familiarity with additional DevOps or cloud-native tools
We offer:
• Private medical care with dental care (covering 70% of costs); family package option possible
• Multisport card (also for an accompanying person)
• Life insurance
• International work environment