Job offers
Filters
Expert DevOps Engineer (multi-cloud)
ING Bank Śląski S.A.
⚲ Katowice
16 000–26 000 PLN gross/month
Requirements
- Windows Server
- Ansible
Our requirements:
- experience administering a large Windows Server environment using tools such as SCCM or Ansible
- familiarity with public cloud providers' solutions and technologies
- experience with automation (e.g. PowerShell, Terraform or similar)
- knowledge of the DevOps approach and CI/CD tools
- the ability to diagnose infrastructure problems

About the project:
Our team works in a multi-cloud environment across a whole group of technology platforms: from operating systems such as Windows Server and Red Hat Linux, through application servers and load balancers, to container platforms. We deliver both in ING's private cloud and in the public Google Cloud. We guarantee a good atmosphere, close teamwork and the chance to gain new skills. Working with us, you will take part in the bank's technology transformation.

Responsibilities:
- design, maintain and deploy solutions in the x86 platform area based on Microsoft Windows Server
- maintain infrastructure in a multi-cloud environment (Google Cloud Platform, ING Private Cloud)
- automate administrative processes and deployments (DevOps, scripting, IaC)
- work with the bank's internal customers and with ING specialists from other countries
- monitor systems, resolve incidents and look after infrastructure security

We offer:
- Work in a business- and technology-mature IT division of over 1000 people, where together with us you will build solutions for modern banking
- Comprehensive development of competencies in software development, IT security and the DevOps methodology through participation in projects, including the migration of our environments and applications to the public cloud
- Access to learning platforms such as Microsoft ESI, Udemy Business and eTutor, plus co-funding of necessary technical training
- Collaboration in mutually learning communities (developer, analyst, tester, Women in Tech), including work with mentors and learning through accompanying placements
- A medical package, life insurance, the Mindgram platform, an Employee Pension Plan and a Cancer Prevention Program
- Sports cards (Medicover Sport and Multisport) and a points-based benefits cafeteria
- Additional days off
- An Employee Referral Program with a cash award
- The option of author's tax-deductible costs (up to 70%) for creative contribution to software development
- Attractive competitions with prizes and discounts in selected shops
DevOps Engineer 💻
KMD Poland
⚲ Warszawa
140–160 PLN/h net (B2B)
Requirements
- DevOps
- IaC
- CI/CD
- Hyper-V
- Powershell
- Kubernetes
- Agile
Location: Warsaw, Inflancka 4A (hybrid: 1 day per week in the office)

Our Cloud Infrastructure and Operations team provides the best cloud technologies, design, and architecture to support the move of KMD products to the cloud. As a Hybrid Cloud DevOps Engineer you will be part of a team whose primary focus will be to build, operate and maintain an Azure Local solution (an Azure hyperconverged infrastructure cluster) and the underlying datacenter infrastructure. We see DevOps as a strong collaboration between Development and Operations, with both being part of the same agile product team. Sharing a backlog and having common goals are key to Dev and Ops working together to create awesome products. In our team we are responsible for getting the most out of Azure and the DevOps principles.

✨ After your morning coffee you will be:
• Building, operating and maintaining complex hybrid cloud infrastructure for KMD products based on Azure Local and the global Azure cloud.
• Working on cutting-edge technologies.
• Acting as an SME in the DevOps area and as a sparring partner for product teams.
• Supporting cloudification across KMD products.
• Suggesting new services, improving existing ones, and refining the accompanying processes and procedures.

✨ We are a GOOD match if you, professionally:
• Have experience in Windows Server technologies.
• Have worked with Hyper-V virtualization.
• Have knowledge of managing physical server infrastructure.
• Are familiar with Kubernetes cloud deployments (AKS/EKS/GKE).
• Know DevOps and CI/CD pipeline principles and have implementation experience (preferably Azure DevOps).
• Have used Infrastructure as Code (IaC) automation for both cloud and on-premise deployments (e.g. Terraform, Azure Resource Manager templates, Ansible).
• Have knowledge of IP networking concepts and troubleshooting.
• Have a good understanding of Azure cloud core services.
• Are experienced in PowerShell.
• Are experienced in the DevOps/Agile/Scrum approach.

And personally:
• Are a self-driven person experienced in working in virtual teams.
• Are a team player and consider knowledge sharing a natural part of being professional.
• Are fluent in Polish and English, both spoken and written.
• Are skilled in problem solving.
• Are ready to support planned activities outside business hours.

✨ We will be a PERFECT match if you additionally ("nice to have"):
• Have worked with Azure Local or Windows Server Failover Cluster.
• Have experience in Windows Server security, including Active Directory.
• Have knowledge of MS SQL database administration.
• Are familiar with Windows Containers.
• Are Microsoft Certified in Azure technologies.
• Are experienced with Linux.
• Have worked with Kubernetes resource-definition templating tools (e.g. Helm, Kustomize).
• Have used monitoring and visualization tools (e.g. Prometheus, Grafana, Telegraf).
• Have used log aggregation and search engines (e.g. ELK Stack).
• Know GitOps tools (e.g. ArgoCD, Flux).
• Have knowledge of security concepts.
• Have experience configuring software- and hardware-based load balancers, switches, routers and firewalls.
• Are ready to work 24/7 on-call shifts.
DevOps Engineer [M/F]
Stackmine
⚲ Remote
21 840 - 28 560 PLN (B2B)
Requirements
- Jenkins
About the project:
The goal of the project is to keep the platform supporting the software development and deployment process stable, performant and secure. The platform automates building, deploying and managing application artifacts, so that development teams can focus on business logic.

Requirements:
- min. 5 years of experience in a DevOps, Operations or SRE role,
- very good knowledge of Kubernetes and cloud-native environments,
- experience automating DevOps processes and working with Infrastructure as Code and the GitOps approach,
- experience with deployment-management tools (e.g. ArgoCD, Harbor, Kyverno Operator),
- practical knowledge of Git and CI/CD tools (e.g. GitLab, Harness),
- experience with planning and documentation tools (e.g. Jira, Jira Service Management, Confluence),
- knowledge of Linux systems and scripting skills (e.g. Bash, Python or Go),
- experience with monitoring and observability tools,
- the ability to analyze and optimize operational processes,
- very good command of English and Polish (min. C1).

Daily tasks:
- ensuring the stability and reliability of production and test environments,
- handling incidents, performing root cause analysis and resolving complex operational problems,
- maintaining and optimizing Kubernetes-based deployments,
- managing and developing CI/CD pipelines,
- maintaining artifact repositories and tools supporting the software development process,
- automating operational and deployment processes,
- monitoring systems and growing the observability area,
- applying security best practices in day-to-day operations,
- managing changes and updates of platform components,
- identifying performance bottlenecks and improving deployment processes,
- collaborating with project teams and stakeholders on resolving operational problems.
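The requirements above call for Infrastructure as Code and the GitOps approach (tools like ArgoCD or Flux). As a rough, hypothetical sketch of the idea those tools implement — Git holds the desired state, and a controller repeatedly diffs it against the live state and converges the two — here it is in plain Python dicts, not any real Kubernetes API:

```python
# Toy GitOps-style reconciliation loop. The dict "states" are illustrative
# stand-ins for manifests in Git (desired) and cluster resources (live).

def diff(desired: dict, live: dict) -> dict:
    """Return the actions needed to move `live` toward `desired`."""
    create = {k: v for k, v in desired.items() if k not in live}
    update = {k: v for k, v in desired.items() if k in live and live[k] != v}
    delete = [k for k in live if k not in desired]
    return {"create": create, "update": update, "delete": delete}

def reconcile(desired: dict, live: dict) -> dict:
    """Apply one reconciliation pass and return the new live state."""
    actions = diff(desired, live)
    new_live = {k: v for k, v in live.items() if k not in actions["delete"]}
    new_live.update(actions["create"])  # resources missing from the cluster
    new_live.update(actions["update"])  # resources that drifted from Git
    return new_live

desired = {"web": {"replicas": 3}, "worker": {"replicas": 1}}
live = {"web": {"replicas": 2}, "legacy": {"replicas": 1}}
converged = reconcile(desired, live)
# converged == desired: "web" scaled up, "worker" created, "legacy" pruned
```

Real controllers run this loop continuously, which is why manual cluster edits get reverted: only a change in Git survives the next pass.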
DevOps Engineer (Azure)
Connectis
⚲ Warszawa
28 500–32 500 PLN net (B2B)
Requirements
- AKS
- Windows Server
- AWS
- Jenkins
- CI/CD
- Ansible
- Azure
- Terraform
- MS Active Directory
- Infrastructure as code
Together with our client, a leader in the fuel and energy industry, we are looking for a DevOps Engineer (Azure). Our client is developing a digital platform for business, covering payments, fleet management and additional services such as refueling, car washes and invoicing, complemented by API integrations and modern mobile and web solutions. The project involves building a global platform on cloud services (mainly Azure) that delivers its functionality without location constraints, offering innovative solutions to customers.

💡 YOUR ROLE:
• Managing MS Azure cloud services covering storage, security, networking and compute in the IaaS, PaaS and SaaS models.
• Working with IT Architecture to keep solutions aligned with the technology roadmap.
• Owning the cost efficiency and optimization of cloud-hosted solutions.
• Proposing, designing and delivering proofs of concept for new cloud solutions.
• Implementing improvements, standardization and automation of MS Azure environments.
• Ensuring the availability and performance of critical IT infrastructure.
• Participating in projects or leading minor development work.
• Co-creating roadmaps for cloud IT solutions.

🔍 WHAT DO WE EXPECT FROM YOU?
• At least 5 years of commercial experience as a DevOps Engineer or in a similar role, including experience with the Azure cloud.
• Experience provisioning infrastructure as code (IaC) with tools such as Terraform, CloudFormation, Ansible or Chef.
• Good knowledge of configuring and managing storage, virtual machines (VMs), containers (AKS, ACI) and virtual networks.
• Experience managing MS Azure, including Management Groups, Subscriptions and Resource Groups.
• Knowledge of CI/CD processes and tools such as Azure DevOps, GitHub Actions and Jenkins.
• Experience in identity and access management (IAM, Azure AD).
• Experience managing Microsoft Active Directory and Windows Server.
• Knowledge of ITSM and ITIL processes.

Nice to have:
• Experience with other clouds, such as AWS and Oracle Public Cloud.
• Experience in cloud migration or transformation projects.
• Experience managing Linux systems.

✨ WE OFFER:
• Participation in team-building meetings and technology meetups for sharing knowledge and experience.
• Support from a dedicated Connectis contact person for project-related matters.
• Career development and long-term cooperation with a company with an established market position.
• Hybrid work in Warszawa (2–3 days a week from the office).
• 5000 PLN for referring friends to our projects.
• Access to the latest technologies and work tools.
• A fast, remote recruitment process.

Thank you for all applications. Please note that we will only contact candidates living in Warszawa or the surrounding area (up to 100 km). 12737/RJ
DevOps Engineer (Azure)
DahliaMatic Sp. z o.o.
⚲ Warszawa
22 000 - 29 000 PLN (B2B)
Requirements
- DevOps
- Azure
About the project:
We are looking for candidates with 2–3+ years of commercial experience as a DevOps Engineer (including commercial experience with the Azure cloud) for a project developing a global, cloud-based retail platform. You can work fully on-site in the Warszawa office or in a hybrid model, as you prefer (the hybrid model requires at least 2 office visits in Warszawa per week). Only candidates living in or near Warszawa will be considered.

Requirements:
Must have:
- 2–3+ years of commercial experience in a DevOps Engineer role (incl. commercial experience in Azure)
- Graduate of IT studies
- Residence in / close to Warsaw
DevOps Engineer
KMD Poland
⚲ Warsaw
23 520 - 26 880 PLN (B2B)
Requirements
- DevOps
- HyperV
- Kubernetes
- CI/CD
- Azure DevOps
- Powershell
- Agile
About the project:
Location: Warsaw, Inflancka 4A (hybrid: 1 day per week in the office). B2B contract, targeted salary: 140–160 PLN net/hour.

Our Cloud Infrastructure and Operations team provides the best cloud technologies, design, and architecture to support the move of KMD products to the cloud. As a Hybrid Cloud DevOps Engineer you will be part of a team whose primary focus will be to build, operate and maintain an Azure Local solution (an Azure hyperconverged infrastructure cluster) and the underlying datacenter infrastructure. We see DevOps as a strong collaboration between Development and Operations, with both being part of the same agile product team. Sharing a backlog and having common goals are key to Dev and Ops working together to create awesome products. In our team we are responsible for getting the most out of Azure and the DevOps principles.

Requirements:
We are a GOOD match if you, professionally:
- Have experience in Windows Server technologies.
- Have worked with Hyper-V virtualization.
- Have knowledge of managing physical server infrastructure.
- Are familiar with Kubernetes cloud deployments (AKS/EKS/GKE).
- Know DevOps and CI/CD pipeline principles and have implementation experience (preferably Azure DevOps).
- Have used Infrastructure as Code (IaC) automation for both cloud and on-premise deployments (e.g. Terraform, Azure Resource Manager templates, Ansible).
- Have knowledge of IP networking concepts and troubleshooting.
- Have a good understanding of Azure cloud core services.
- Are experienced in PowerShell.
- Are experienced in the DevOps/Agile/Scrum approach.

And personally:
- Are a self-driven person experienced in working in virtual teams.
- Are a team player and consider knowledge sharing a natural part of being professional.
- Are fluent in Polish and English, both spoken and written.
- Are skilled in problem solving.
- Are ready to support planned activities outside business hours.

We will be a PERFECT match if you additionally ("nice to have"):
- Have worked with Azure Local or Windows Server Failover Cluster.
- Have experience in Windows Server security, including Active Directory.
- Have knowledge of MS SQL database administration.
- Are familiar with Windows Containers.
- Are Microsoft Certified in Azure technologies.
- Are experienced with Linux.
- Have worked with Kubernetes resource-definition templating tools (e.g. Helm, Kustomize).
- Have used monitoring and visualization tools (e.g. Prometheus, Grafana, Telegraf).
- Have used log aggregation and search engines (e.g. ELK Stack).
- Know GitOps tools (e.g. ArgoCD, Flux).
- Have knowledge of security concepts.
- Have experience configuring software- and hardware-based load balancers, switches, routers and firewalls.
- Are ready to work 24/7 on-call shifts.

Daily tasks:
- Building, operating and maintaining complex hybrid cloud infrastructure for KMD products based on Azure Local and the global Azure cloud.
- Working on cutting-edge technologies.
- Acting as an SME in the DevOps area and as a sparring partner for product teams.
- Supporting cloudification across KMD products.
- Suggesting new services, improving existing ones, and refining the accompanying processes and procedures.
Data Engineer
Lyft
Requirements
- Python
- SQL
- Hadoop
- Airflow
At Lyft, our purpose is to serve and connect. We aim to achieve this by cultivating a work environment where all team members belong and have the opportunity to thrive. The Safety and Customer Care (SCC) team at Lyft manages over 1.7 million monthly human and AI interactions and serves as Lyft's primary direct touchpoint with riders and drivers. We handle critical infrastructure that powers both human associates and AI agents to make riders and drivers feel safe and comfortable while riding or driving with Lyft, transforming every support interaction into a moment of genuine connection. As a Data Engineer on the SCC team, you will have ownership over the data modeling and pipelines that power SCC's Associate and AI Agent Platform. Your efforts will be critical to the reliability of our pipelines, execution of third-party data integrations, accurate reporting of agent performance, and efficiency improvements that can save millions of dollars a year. You will work cross-functionally to bridge Lyft's business goals with data engineering. Your efforts will unlock business and user behavior insights, using huge amounts of Lyft data to fuel several teams such as Analytics, Data Science, Engineering, and many others.
Responsibilities:
* Own the core data pipeline, scaling up the data processing flow to meet rapid data growth at Lyft
* Evolve the data model and data schema based on business and engineering needs
* Implement systems tracking data quality and consistency
* Develop tools supporting self-service data pipeline management (ETL)
* Tune SQL and MapReduce jobs to improve data processing performance
* Write well-crafted, well-tested, readable, maintainable code
* Participate in code reviews to ensure code quality and distribute knowledge
* Collaborate cross-functionally with product, engineering, data science, and marketing teams to understand business problems and align on prioritization and solutions

This role is fully remote in Ukraine; however, employees are required to live and work in Ukraine to qualify for this position. #Remote
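One responsibility above is SQL and MapReduce job tuning. As a minimal refresher of the model being tuned, the map/shuffle/reduce flow can be sketched in pure Python; the word-count task and sample records are illustrative only, not Lyft's actual pipeline:

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Map: emit (key, 1) pairs, one per word in the record."""
    for word in record.split():
        yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's grouped values."""
    return {key: sum(values) for key, values in groups.items()}

# Hypothetical support-event records, standing in for Hadoop-scale input.
records = ["driver reported issue", "rider reported delay"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(r) for r in records)))
# counts["reported"] == 2; every other word appears once
```

Tuning such jobs usually means reducing the data that crosses the shuffle (e.g. pre-aggregating in a combiner) since the shuffle step dominates cost at scale.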
Cloud Support Engineer (GCP)
Scalo
⚲ Warszawa, Łódź
18 480 - 20 160 PLN (B2B)
Requirements
- Terraform
- GCP
- Cloud
- Jenkins
- Bitbucket
- TeamCity
- DevOps
- Terragrunt (nice to have)
- Azure (nice to have)
- GitLab (nice to have)
- Artifactory (nice to have)
- Prometheus (nice to have)
- API (nice to have)
- Packer (nice to have)
About the project:
Hi! With us you'll find what you're looking for. See for yourself what we have for you:
- Stable cooperation: technology challenges and work with modern solutions; Internal Mobility means you don't have to change companies to change projects
- SCALO Ambassadors: develop technical and pre-sales competencies, influence the direction of the organization and support others
- Personal branding: create valuable content, share knowledge and experience, speak as an expert at events (e.g. ProMeet) and grow your brand naturally
- Benefit box: full medical care, a MultiSport card and the wide Motivizer offer; pick the benefits that support your health, energy and daily comfort
- Referral program: refer a friend from IT and we'll handle the rest; they get a new project, you get a bonus. A win-win in its purest form
- Company integration events: outings, company events and chances to get to know each other outside projects, because good relationships make the best teams

Requirements:
This offer is for you if:
- you have at least 1 year of experience with Terraform,
- you have practical knowledge of GCP cloud solutions,
- you know the availability and maintenance standards for public-facing solutions in line with the Cloud Standards documentation,
- you know the services available in the public cloud and can integrate them with on-premises environments, especially at the network and functional layers,
- you have hands-on knowledge of process automation tools (e.g. Jenkins, Bitbucket, TeamCity, Terraform),
- DevOps practices and Agile work are familiar to you,
- you have a good command of English,
- you are not afraid of technologies new to you; you value independence but can also work in a team.

Nice to have:
- knowledge of Terragrunt,
- certificates in the Azure area (e.g. AZ-103, AZ-104, AZ-303, AZ-304) or Google area (e.g. Associate Cloud Engineer, Professional Cloud Developer, Professional Data Engineer),
- knowledge of: GitLab, Artifactory, Nexus, Prometheus, API, Packer.

Daily tasks:
- advising on, designing and building high-level solutions for systems migrated or deployed to the public cloud,
- handling IT requests from users,
- running Proof of Concept (PoC) tests of cloud solutions,
- delivering training and workshops on using the public cloud,
- supporting system deployments to the cloud, with particular focus on Infrastructure as Code (IaC),
- developing documentation and a knowledge base for IT users,
- collaborating with DevOps teams and Cloud Architects, including consulting on the direction of cloud solutions,
- helping fix errors and optimize deployment-automation processes with tools such as Terraform,
- hybrid work: office meetings in Warszawa twice a month,
- rate up to 120 PLN/h on B2B, depending on experience.
Cloud Infrastructure Engineer
Ringier Axel Springer Tech
⚲ Kraków, Warszawa
12 400 - 20 600 PLN (B2B)
Requirements
- Cloud
- AWS
- IaC
- Python
- Terraform
- CloudFormation
- Git
- GitHub
- GitHub Actions
- Google cloud platform (nice to have)
- Azure (nice to have)
About the project:
In the Cloud Governance Team we build the infrastructure foundation and working environment for more than 300 engineers. We build and develop the cloud platform on which products used by over 40 million users worldwide are created. We work with a modern multi-cloud stack (AWS, Azure, GCP) and automation in IaC and Python, and we solve real engineering challenges: from enterprise-scale infrastructure security to Disaster Recovery and High Availability for globally running systems. If you want real influence on infrastructure used by millions of users and want to co-create a platform for hundreds of users, join our team.

Requirements:
- AWS: you know the services and how to use them in practice
- Infrastructure as Code: Terraform, CloudFormation
- Python: for building scripts, with libraries such as boto3
- Git + GitHub: code review, CI/CD (GitHub Actions)
- A "lifelong learner" mindset: technologies change, and so should you
- English at min. B2

Nice to have:
- Microsoft Azure: the second piece of the multi-cloud puzzle
- Google Cloud Platform: for the complete multi-cloud expert

Daily tasks:
- You build and manage infrastructure: Terraform, CloudFormation and Terragrunt are your tools for automating deployments and managing clouds.
- Code review and mentoring: you review your teammates' solutions, share knowledge and learn from others.
- You build cost-monitoring and security tools that let product teams fix infrastructure vulnerabilities and manage costs on their own.
- You follow technology trends: you adopt new cloud solutions, test the latest AWS/Azure/GCP services and share knowledge with the team.
- You work with other product teams on integrations and problem solving.
- You take care of documentation: you create and update technical and process documentation.
- You work in Scrum: through active participation in sprint planning, grooming and review, you shape the future of the platform.
- You ensure security: you implement security policies and disaster recovery plans, monitor threats and respond to incidents.
Cloud & Infrastructure Operations Engineer
Scalo
⚲ Gdańsk
23 520 - 25 200 PLN (B2B)
Requirements
- Cloud
- Script
About the project:
Hi! With us you'll find what you're looking for. See for yourself what we have for you:
- Stable cooperation: technology challenges and work with modern solutions; Internal Mobility means you don't have to change companies to change projects
- SCALO Ambassadors: develop technical and pre-sales competencies, influence the direction of the organization and support others
- Personal branding: create valuable content, share knowledge and experience, speak as an expert at events (e.g. ProMeet) and grow your brand naturally
- Benefit box: full medical care, a MultiSport card and the wide Motivizer offer; pick the benefits that support your health, energy and daily comfort
- Referral program: refer a friend from IT and we'll handle the rest; they get a new project, you get a bonus. A win-win in its purest form
- Company integration events: outings, company events and chances to get to know each other outside projects, because good relationships make the best teams

Requirements:
This offer is for you if:
- you have experience working with operating systems and can diagnose system problems,
- you know cloud technology concepts and can troubleshoot problems in cloud environments,
- you have knowledge of networking in corporate environments,
- you understand virtual machine concepts and infrastructure management,
- you have experience in scripting or automation,
- you work well with an operations team and in an international environment,
- you have a very good command of English (min. B2/C1).

Daily tasks:
- working in the Hosting Operations team, responsible for the stability and upkeep of our clients' production environments,
- collaborating with an international team working in a follow-the-sun model, with locations in Poland (Gdańsk), India (Bangalore) and Canada (Waterloo),
- real influence on the maintenance, development and automation of infrastructure,
- taking part in monitoring production environments, resolving infrastructure incidents, deployments and Disaster Recovery,
- opportunities to develop skills in cloud, infrastructure and IT operations automation,
- hybrid work: the first three months on-site, then remote with occasional visits to the Gdańsk office,
- rate up to 150 PLN/h on B2B, depending on experience.
CI/CD Developer
Sea Machines Robotics Sp. z o.o.
⚲ Gdańsk
14 000 - 18 000 PLN (PERMANENT)
Requirements
- BS
- CI/CD Pipelines
- Jenkins
- GitHub
- GitLab CI
- Docker
- C++
- Python
- Linux
- Bash
- Git
- AWS (nice to have)
- Testing (nice to have)
About the project:
What we offer / Benefits
- Office stocked with snacks & refreshments.
- Team lunch every Wednesday: a chance to connect, unwind, and enjoy a meal together.
- Hands-on impact: your code will power AI-driven maritime autonomy, used in actual vessel operations.
- A disciplined engineering environment with room for experimentation and learning.
- Flexible hours with 3 required office days in Gdańsk.
- Private healthcare: access to medical services for your well-being.
- Sports & wellness allowance: we support an active lifestyle with subsidies for gym membership.
- Eyewear subsidy.

About our Company
We're a fast-growing team developing practical AI that helps ships and workboats operate remotely, make better decisions, and stay reliable in dynamic, high-stakes environments. The ocean transportation market is huge, under-digitized, and full of hard engineering problems: messy sensor data, constrained hardware, safety requirements, and systems that have to work offshore. Our products turn that complexity into usable tools for maritime operators, combining autonomy software with clear human-machine interaction. Based in Gdańsk, Poland, with headquarters in Boston, USA, we're a crew of engineers, software developers, and autonomy scientists working together to ship technology that gets deployed in the field. If you want ownership, technical depth, and outcomes you can measure in the real world, this is the kind of challenge you'll find with us. Apply and let's get to know each other.

Requirements:
- BS or MS in Computer Science, Computer Engineering, or a related engineering field
- 4+ years of experience designing and maintaining CI/CD pipelines for production systems
- Strong experience with CI/CD platforms (Jenkins, GitHub Actions, GitLab CI, or similar)
- Experience with containerization technologies (Docker)
- Experience building pipelines for C++ and Python projects
- Experience working in Linux environments
- Experience with infrastructure automation and scripting (Bash, Python)
- Experience with version control systems (Git) and branching strategies

Nice to have:
- Familiarity with cloud infrastructure (AWS or similar)
- Experience supporting hardware-in-the-loop or simulation-based testing environments
- Experience working with robotics or autonomous vehicle software stacks
- Experience optimizing build performance and large-scale CI pipelines
- Experience managing software releases and artifact versioning for deployed systems

Daily tasks:
- Design, implement, and maintain CI/CD pipelines for autonomy, perception, and platform software deployed on autonomous vessels
- Build and maintain automated build, test, and deployment pipelines for C++, Python, and containerized services
- Develop infrastructure to support automated simulation testing, hardware-in-the-loop testing, and regression validation
- Maintain CI pipelines for code quality, static analysis, unit testing, integration testing, and system testing
- Improve developer workflows for faster, safer software delivery
- Develop infrastructure for remote software deployment and fleet updates
- Work closely with autonomy, perception, networking, and DevOps teams to ensure reliable software integration
- Develop scenarios in collaboration with the Marine Test Lead to embed in the CI/CD pipeline
- Monitor pipeline performance and continuously improve build speed, reliability, and test coverage
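The pipelines described in this role enforce an ordered, fail-fast sequence of stages (build, static analysis, unit tests, integration tests). A minimal sketch of that control flow, with hypothetical stage names and stub steps rather than any real Jenkins or GitHub Actions configuration:

```python
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Run stages in order; return the names that ran, stopping on failure."""
    executed = []
    for name, step in stages:
        executed.append(name)
        if not step():
            break  # fail fast: later stages never run
    return executed

# Stub steps standing in for real build/test commands.
stages = [
    ("build", lambda: True),
    ("static-analysis", lambda: True),
    ("unit-tests", lambda: False),   # simulated failure
    ("integration-tests", lambda: True),
]
ran = run_pipeline(stages)
# ran == ["build", "static-analysis", "unit-tests"]: the failure in
# unit-tests prevents integration-tests from ever starting
```

Real CI platforms declare the same ordering in config (stage dependencies, `needs:` keys) rather than code; the fail-fast property is what keeps broken builds from reaching expensive hardware-in-the-loop stages.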
Application Maintenance Engineer
Fujitsu Poland Sp. z o.o
⚲ Łódź
Requirements
- Linux
- Windows Server
- Docker
Our requirements:
- Hands-on experience in application operations, maintenance, or support.
- Strong troubleshooting skills in Linux (openSUSE, SLES, or similar) and Windows Server environments.
- Practical experience with Docker or Podman for managing containerized applications.
- Understanding of network fundamentals, connectivity, and dependencies between services.
- Experience analyzing system and application logs to identify and resolve issues.
- Working knowledge of ServiceNow, Confluence, or similar enterprise systems.
- Ability to update and validate applications following defined change and release procedures.
- Strong communication skills and fluency in English (spoken and written).

Soft skills:
- Analytical and curious: investigates beyond the surface to understand root causes.
- Structured and reliable: ensures that operational tasks follow process and documentation standards.
- Collaborative: works effectively with colleagues and shares insights across teams.
- Creative problem-solver: applies logic and initiative when encountering new issues.
- Service-minded: committed to stability, responsiveness, and quality in operations.
- Adaptable: comfortable working across diverse systems and environments.

Responsibilities:
The Application Maintenance Engineer is responsible for maintaining, configuring, updating, and troubleshooting enterprise applications across Linux and Windows environments.
- Execute maintenance, configuration, and application update tasks according to documented procedures.
- Perform advanced troubleshooting and analysis across application, OS, and container layers (Docker/Podman).
- Investigate and interpret application and system logs to identify root causes and propose corrective actions.
- Validate configurations and verify service functionality after releases or infrastructure changes.
- Apply independent judgement when resolving issues not fully covered by predefined instructions.
- Collaborate with the Lead and peers to refine and improve maintenance documentation.
- Participate in change and release activities, including pre-checks, post-validation, and rollback coordination.
- Communicate findings clearly to the Lead and related teams (Development, QA, Infrastructure).
- Maintain accurate and up-to-date operational documentation.
- Contribute to process improvements and knowledge sharing within the maintenance team.

We offer:
- Health and well-being: private medical care, group life insurance, a 24/7 Employee Support Program.
- Flexibility at work: work from modern offices with parking, or from home; workation.
- Time to recharge: 2 paid "happy hours" per month, adding up to 3 extra days off annually; an additional day off after using your holiday entitlement (up to 30 days of leave per year); holiday allowance.
- Perks and benefits: MyBenefit cafeteria, sports cards.
- Growth opportunities: free e-learning platforms, shared costs of professional training and certifications.
- Making an impact: responsible business initiatives.
AWS Platform Architect / Cloud Engineer
Scalo
⚲ Gdańsk
30 240 - 31 920 PLN (B2B)
Wymagania
- AWS
- Terraform
- Ansible
- Shell
- DNS
- API
- PHP
About the project: Hi! With us you'll find what you're looking for – see for yourself! What we have for you:
- Stable cooperation, technology challenges, and work with modern solutions.
- Internal Mobility – you don't have to change companies to change projects.
- SCALO Ambassadors – grow your technical and presales skills, influence the direction of the organization, and support others.
- Personal branding – create valuable content, share your knowledge and experience, appear as an expert at events (e.g. ProMeet), and build your brand in a natural way.
- Benefit box – full medical care, a MultiSport card, and the wide Motivizer offer: pick the benefits that support your health, energy, and everyday comfort.
- Referral program – refer a friend from IT and we'll take care of the rest: they get a new project, you get a bonus. A win–win in its purest form.
- Company events and get-togethers – shared outings, company events, and chances to meet outside of projects, because good relationships make the best teams.

Requirements: This offer is for you if:
- you have experience designing and building solutions in a public cloud (preferably AWS),
- you have hands-on experience automating infrastructure with Terraform, Ansible, or similar tools,
- you hold an AWS (or other public cloud) certification, at least at the Associate level,
- you have a good knowledge of network architecture in enterprise environments,
- you have experience solving complex technical problems in cloud environments,
- you know shell scripting and the basics of working with databases, DNS, and APIs,
- you can analyze system-level problems and propose long-term architectural solutions,
- you communicate well in English (min. B2) and can work in an international environment.

Daily tasks:
- taking part in an international project developing and maintaining an AWS cloud platform,
- managing and growing the AWS Landing Zone and cloud infrastructure standards,
- automating infrastructure and operational processes with Infrastructure as Code (Terraform, Ansible),
- working with Solution Architects and development teams on rolling out new components,
- ensuring the reliability and security of the environment through monitoring, problem analysis, and durable fixes,
- applying best practices for cloud operations, security, and cost optimization,
- hybrid work model – Gdańsk,
- rate up to 190 PLN/h B2B, depending on experience.
Sr. DevSecOps Engineer – Reliability & Security
Smartsheet
Wymagania
- Kubernetes
- AWS
- Terraform
- Python
- Go
About Smartsheet: For over 20 years, Smartsheet has helped people and teams achieve – well, anything. From seamless work management to smart, scalable solutions, we’ve always worked with flow. We’re building tools that empower teams to automate the manual, uncover insights, and scale smarter. But more than that, we’re creating space – space to think big, take action, and unlock the kind of work that truly matters. Because when challenge meets purpose, and passion turns into progress, that’s magic at work, and it’s what we show up for every day.

Automation is the key to creating highly reliable and secure large-scale software systems. Are you someone who engineers solutions to problems rather than simply fixing the same thing over and over again? Can you protect Smartsheet against attackers? We are looking for a Senior DevSecOps Engineer to join our global Security Operations team. In this critical role, you will be a leader in maturing our security and reliability posture by treating both as software engineering challenges. You will engineer and operate a highly reliable, scalable, and defensible production environment, directly impacting our ability to deliver a world-class service to our customers 24/7. This is a unique opportunity to blend deep expertise in Site Reliability Engineering (SRE) and modern Security Operations, working at the intersection of infrastructure, automation, and security to build a platform that is resilient and secure by design. You will work remotely from Bulgaria.

What You Will Do:
- Engineer Secure and Resilient Infrastructure: Design, build, maintain, and improve secure, scalable, and highly available infrastructure in our multi-cloud environment (primarily AWS) using Infrastructure as Code (IaC) principles with tools like Terraform, Kubernetes, and Helm.
- Automate Proactive Security: Engineer and automate threat detection, incident response, and vulnerability management processes. You will build the tools and workflows that allow us to respond to threats at machine speed.
- Secure the Software Development Lifecycle: Architect and secure our CI/CD pipelines, integrating automated security tooling (SAST, DAST, SCA) to provide developers with fast, actionable feedback.
- Master Container Security: Manage, operate, and secure our container orchestration platform (Kubernetes), implementing best practices for container security from the registry to runtime, including knowledge of hardening requirements such as CIS Benchmarks or DISA STIG.
- Lead Incident Response: Act as a technical lead during security and reliability incidents, driving resolution and conducting blameless post-mortems to engineer preventative solutions.
- Drive Automated Compliance: Implement and automate technical controls to ensure continuous compliance with frameworks such as FedRAMP, SOC 2, and ISO 27001.
- Mentor and Lead: Serve as a subject matter expert for security and reliability, mentoring other engineers and championing a culture of operational excellence and security ownership across the organization.

Get to Know Us: At Smartsheet, your ideas are heard, your potential is supported, and your contributions have real impact. You’ll have the freedom to explore, push boundaries, and grow beyond your role. We welcome diverse perspectives and nontraditional paths – because we know that impact comes from individuals who care deeply and challenge thoughtfully. When you’re doing work that stretches you, excites you, and connects you to something bigger, that’s magic at work. Let’s build what’s next, together.
Software Engineer
Aircall
⚲ Madrid
Wymagania
- Node.js
- TypeScript
- AWS
- React
- Python
Aircall Aircall is a unicorn, AI-powered customer communications platform used by 22,000+ companies worldwide to drive revenue, resolve issues faster, and scale customer-facing teams. We’re redefining customer communications by bringing voice, SMS, WhatsApp, and AI together into one seamless workspace. Our momentum comes from a simple idea: help teams work smarter, not harder. Aircall’s AI Voice Agent automates routine calls, AI Assist streamlines post-call work, and AI Assist Pro delivers real-time guidance so people can do their best work. The result is higher revenue, faster resolutions, and teams that scale with confidence. Aircall is headquartered in Paris, our European HQ, with a strong North American presence anchored in Seattle, our North American HQ, and teams across Madrid, London, Berlin, San Francisco, New York City, Sydney, and Mexico City. We’ve built a product customers love and a business that’s scaling quickly, backed by world-class investors and driven by rapid AI innovation across multiple product lines. At Aircall, you’ll join a company in motion. We’re ambitious, product-driven, and execution-focused, with visible impact, fast decisions, and real growth. How we work at Aircall: We’re customer-obsessed, data-driven, and focused on delivering meaningful outcomes. We value ownership, continuous learning, and thoughtful speed. If you thrive in a collaborative, fast-moving environment where trust and impact matter, you’ll feel at home here. Software Engineer As a Software Engineer, you will be responsible for the design, development, deployment, and operation of business-critical features that add customer value. Operational excellence, metrics, observability and best practices will be part of your day to day job. Quality, excellence, and agility drive our delivery processes and you will help us level these up. 
Using Node.js and TypeScript back-end stacks, appropriate for each requirement but always hosted on AWS, we build value-added, resilient services on top of voice and integrate fully with our customers’ business-critical tools (CRM, Helpdesk, E-Commerce, …).

Your mission @ Aircall:
- Create, design, develop, test, and monitor your code in production
- Collaborate with product managers, designers, and other engineers to offer pragmatic solutions to real-world problems and implement them end to end
- Work with your team to design high-quality, secure, and scalable solutions and be accountable for their execution
- Obtain a good understanding of the business to provide relevant solutions to users and clients
- Stay up to date on new technologies and architectures, and demonstrate good judgment in their potential applications
Senior DevOps Engineer
Sofia Stars
⚲ Sofia
Wymagania
- Kubernetes
- Istio
- Calico
- Helm
- Terraform
Sofia Stars is an operational services company based in Sofia. We offer a range of solutions for online businesses, including R&D, Marketing, Customer Support, KYC, Risk, and Anti-Fraud services. With 300+ bright stars on our team, we deliver secure, reliable solutions with a touch of quality that shines. When you join us, you’ll be part of a place where ideas light up, and growth isn’t just a promise – it’s a journey. Sofia Stars is part of a big family housing over 3,000 employees and close to 1,000 R&D professionals. Join a global team creating software for over 5,000,000 daily users. We are looking for a skilled Senior DevOps Engineer for our office in Sofia, Bulgaria.

Opportunities & tasks:
- Manage, maintain, configure, and deploy Kubernetes clusters
- Improve existing Helm charts and write new ones
- Work with development teams to enhance production stability, availability, and reliability
- Monitor system performance
- Build and maintain CI/CD pipelines using GitLab
- Work with cloud providers
- Work with Cloudflare
Engineering Manager in Cybersecurity domain
CodiLime
⚲ Warszawa
20 000 - 25 000 PLN (B2B)
Wymagania
- DevOps
- network automation
- software development
About the project: Get to know us better. CodiLime is a software and network engineering industry expert and the first-choice service partner for top global networking hardware providers, software providers, and telecoms. We create proofs-of-concept, help our clients build new products, nurture existing ones, and provide services in production environments. Our clients include both tech startups and big players in various industries and geographic locations (US, Japan, Israel, Europe). While no longer a startup (we have 250+ people on board and have been operating since 2011), we’ve kept our people-oriented culture. Our values are simple:
- Act to deliver.
- Disrupt to grow.
- Team up to win.

The project and the team: We need to onboard a highly qualified manager to help our teams achieve their goals. The project is not homogeneous; it comprises multiple streams varying from development to delivery. The heart of the project is security. Together with our client, a global cybersecurity leader, we are working on delivering projects and solutions to increase internal effectiveness and provide services to organizations around the world. The team is international and multicultural, spread across the world. The project team consists of highly skilled network, DevOps, and software engineers.

Your role: We are seeking an Engineering Manager to lead a cybersecurity project. You need to have a strong technical background, a deep understanding of the domain, and some network automation or software development experience. You will be responsible for successful delivery and for sustaining the current pace of growth of the project.

More reasons to join us:
- Flexible working hours and approach to work: fully remote, in the office, or hybrid
- Professional growth supported by internal training sessions and a training budget
- Solid onboarding with a hands-on approach to give you an easy start
- A great atmosphere among professionals who are passionate about their work
- The ability to change the project you work on

Requirements: Do we have a match? As an Engineering Manager, you must meet the following criteria:
- Minimum 3 years as an Engineering Manager, Technical Manager, or in a similar role, responsible for managing both T&M and fixed-price projects
- Experience managing network automation / software development projects
- Managing and shaping a 7+ person team delivering a complex project
- Hands-on experience (3-4 years) as a Software/Network/DevOps engineer working in a customer-facing environment
- Willingness to work flexible hours (EMEA & US time zones) in a multicultural environment
- Strong communication skills in English, both written and verbal (C1 level minimum)

Beyond the criteria above, we would appreciate:
- Experience working as a highly specialized vendor for a global company
- Experience working in a vendor’s Professional Services organization

Daily tasks:
- Making sure that the delivery of the project is smooth by removing any obstacles and aligning the expectations of the stakeholders
- Working daily with Solution Leads on the customer side to identify needs and opportunities within the project
- Collaborating with other managers already on the projects and contributing to common deliverables
- Facilitating and taking an active part in project roadmap discussions, also at a technical level
- People management and team expansion activities
- Visiting the customer a few times a year (both in Europe and the US)
- Identifying upsell and cross-sell opportunities
Engineer, Post Trade (Blockchain)
LSEG
⚲ London
Wymagania
- Java
- Blockchain
- Kubernetes
- Docker
- Kafka
Role Profile: The successful candidate for the Engineer, Post Trade role will work with the Director of Technical Delivery, Solution and Delivery, and will form part of a team building a complex, ground-up, cloud-based critical market infrastructure service in a bold new venture for LSEG. This opening requires a candidate who takes great pride in delivering excellence, with strong logical and technical skills, a can-do attitude combined with a helpful mentality, and a wish to play a critical role in forming and growing a new business.

Key Responsibilities:
- A strong focus on engineering excellence and coding, adopting an open and hands-on approach to problem-solving and delivery.
- Engage deeply in technical design and implementation to ensure solutions are robust, scalable, and aligned with industry standards.
- Actively contribute to all stages of the product engineering life cycle – solutioning, design, coding, and testing – while promoting collaboration and transparency within the team to drive high-quality outcomes.
- Demonstrate ownership and pride in work, proactively taking on new responsibilities aligned with product engineering needs.
- Embrace and apply LSEG engineering principles, diving deep technically to build with purpose and foster excellence within the team through open collaboration.
- Create an environment of engagement, challenge, and constructive questioning, ensuring trust and respect are maintained and a strong one-team mentality is upheld.

Key Behaviours:
- Delivery-focused: Committed to meeting deadlines and managing stakeholder expectations.
- Accountable: Takes ownership and responsibility for outcomes.
- Collaborative: Works effectively within cross-functional teams and fosters teamwork.
- Communicative: Champions clear, respectful, and constructive communication.
- Quality-driven: Maintains high standards in code quality, testing, and CI/CD practices.
- Adaptable & Innovative: Eager to learn, improve, and embrace new technologies.
- Critical yet Respectful: Challenges ideas constructively while maintaining professionalism.
- Engineering Mindset: Passionate about solving problems and minimizing complexity.

Career Stage: Senior Associate

London Stock Exchange Group (LSEG) Information: Join us and be part of a team that values innovation, quality, and continuous improvement. If you're ready to take your career to the next level and make a significant impact, we'd love to hear from you. LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas. We are committed to sustainability across our global business and we are proud to partner with our customers to help them meet their sustainability objectives. Our charity, the LSEG Foundation, provides charitable grants to community groups that help people access economic opportunities and build a secure future with financial independence. Colleagues can get involved through fundraising and volunteering.
Ansible Engineer
Experis Polska
⚲ Cracow, Warsaw
23 520 - 25 200 PLN (B2B)
Wymagania
- Ansible
- Unix
- Shell
- CI/CD Pipelines
- Jenkins
- GitHub
- Cloud
- AWS
- Azure
- GCP
- Git
- AWX
- YAML
- Python
- Bash
- Script
- Operating system
- Linux
- Windows
- PostgreSQL
- K8s
- HashiCorp Vault (nice to have)
About the project: Hybrid work model – Cracow or Warsaw, 5-6 days per month (mandatory). B2B: 140-150 PLN/h net.

What we offer:
- Private healthcare
- Multisport card
- Life insurance
- PowerYou – e-learning platform

Company description: Experis is a global leader in recruiting specialists and managers in key IT areas. With us you will find competitive job offers and interesting IT projects aimed both at experts with many years of experience and at people who are just starting out in the IT industry. We recruit managers and highly qualified consultants with IT industry experience. Experis is part of ManpowerGroup and has been recognized as one of the most ethical companies in the world.

Requirements:
- Hands-on experience performing Ansible system administration activities
- Strong Ansible skills (AWX or AAP)
- Proficient in writing playbooks, using roles, and managing Ansible Galaxy
- Solid understanding of YAML syntax (crucial for writing playbooks)
- Proficiency in Python
- Experience developing bash (Unix shell) scripts, with some understanding of server operating systems
- Fundamental knowledge of both Linux and Windows environments
- Experience with database technologies such as PostgreSQL
- Hands-on experience with tools like Jenkins, GitHub, and Nexus
- Exposure to HashiCorp Vault is an added advantage
- Hands-on experience with Kubernetes (k8s) is a must
- Familiarity with cloud services such as AWS, Azure, or GCP
- Ability to troubleshoot complex technical issues and find effective solutions
- Ability to work effectively with cross-functional teams

Daily tasks:
- Ansible system administration
- Production support
- Develop and maintain Ansible playbooks, roles, and modules for infrastructure automation and configuration management
- Write and review Unix shell scripts
- Automate repetitive operational tasks to improve efficiency and reliability
- Manage and optimize CI/CD pipelines using tools like Jenkins, GitHub, and Nexus
- Integrate Ansible automation with cloud environments (AWS, Azure, GCP), HashiCorp Vault, and/or Ansible Vault
- Work closely with infrastructure, development, and security teams to ensure smooth deployments and environment consistency
- Troubleshoot automation and deployment issues across environments
- Maintain version control using Git and ensure adherence to automation best practices
- Monitor system performance and implement proactive improvements
- Manage configuration drift, security compliance, and infrastructure monitoring
- Manage the Ansible infrastructure for high availability
- Lead deliveries to completion within the committed time
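One of the daily tasks above is managing configuration drift. A minimal, pure-Python sketch of the idea: compare the desired state (what a playbook would declare) against the actual state reported by a host. All host settings and values below are hypothetical; in practice Ansible's check mode reports drift against a playbook.

```python
# Minimal configuration-drift check. Keys and values are invented
# examples, not real host facts.

def find_drift(desired: dict, actual: dict) -> dict:
    """Return {key: (desired, actual)} for every setting that differs
    or is missing on the host."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = (want, have)
    return drift

desired = {"ntp_server": "ntp.example.com", "sshd_port": 22, "selinux": "enforcing"}
actual  = {"ntp_server": "ntp.example.com", "sshd_port": 2222}

print(find_drift(desired, actual))
# {'sshd_port': (22, 2222), 'selinux': ('enforcing', None)}
```

A real remediation step would then feed the drifted keys back into an Ansible run rather than patching hosts by hand.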
Senior Terraform Engineer
Luxoft DXC
Wymagania
- GitLab CI/CD
- Python
- AWS
- EKS
Luxoft, in partnership with a global leader in the automotive industry, is bringing the “a mobile phone on wheels” vision to life by building the software platform of the future for automobiles. We are assembling a team of experts to design, develop, and operate a cutting-edge CI/CD platform that enables the creation of next-generation Android-based automotive software. In this role, you will help build and maintain a fully automated, secure, scalable, and reliable engineering ecosystem, applying DevOps best practices across infrastructure, automation, cloud services, monitoring, and software delivery. Our primary tech stack includes AWS, Terraform, GitLab, Python, EKS, and the Grafana stack. Join us to solve complex technical challenges and help shape the future of the automotive industry.

- Design, build, maintain, and improve a modern CI/CD platform for Android-based automotive software development.
- Develop, optimize, and support build, test, and deployment pipelines in GitLab CI to provide fast and reliable feedback to development teams.
- Manage and improve cloud infrastructure and platform services in AWS using Infrastructure as Code principles.
- Operate and enhance Kubernetes environments with a focus on Amazon EKS.
- Build and maintain monitoring and observability solutions using the Grafana stack.
- Automate operational workflows, environment provisioning, monitoring, and maintenance tasks using Python and Terraform.
- Ensure platform reliability, security, scalability, and high availability across CI/CD systems and supporting infrastructure.
- Troubleshoot and resolve issues across pipelines, infrastructure, Kubernetes workloads, and platform tooling.
- Collaborate closely with development teams to improve developer experience, delivery speed, and engineering efficiency.
- Contribute to DevOps standards, best practices, and continuous improvement initiatives across the platform.
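Since the role pairs Terraform with Python automation, here is a small illustrative sketch: counting the actions in a Terraform plan exported as JSON (`terraform show -json plan.out`). The sample dict mimics the plan's `resource_changes` structure from the Terraform JSON output format; treat the resource addresses as invented examples and this as a sketch, not a full parser.

```python
# Sketch: summarize planned actions from a Terraform plan's JSON form.
# `resource_changes` entries carry a "change" object whose "actions"
# list holds create/update/delete/no-op (per the Terraform JSON format).
from collections import Counter

def summarize_plan(plan: dict) -> Counter:
    """Count planned actions across all resources in the plan."""
    counts = Counter()
    for rc in plan.get("resource_changes", []):
        for action in rc["change"]["actions"]:
            counts[action] += 1
    return counts

sample = {
    "resource_changes": [
        {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}},
        {"address": "aws_iam_role.ci", "change": {"actions": ["update"]}},
        {"address": "aws_instance.old", "change": {"actions": ["delete"]}},
    ]
}
print(summarize_plan(sample))
# Counter({'create': 1, 'update': 1, 'delete': 1})
```

A summary like this is handy in a GitLab CI job that posts the plan's blast radius before approval.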
Senior Python Developer for Virtual Infrastructures
Luxoft DXC
Wymagania
- Python
- PyTest
- Elasticsearch
- Docker
- Kubernetes
As a Senior Software Engineer in a virtual-infrastructure environment, you will be part of the client's team, helping scale and facilitate our product development and testing process. In this role, you will architect and evolve scalable test infrastructure that reduces developer friction and improves deployment confidence, and advocate for best practices in code quality, testability, and CI/CD across the broader engineering organization.
- Develop and maintain our automated system-test infrastructure by writing code in Python
- Design and develop analytics and indexes to characterize release quality, or to help identify flaky tests or misbehaving infrastructure components, using Elasticsearch, Logstash, and Kibana
- Be part of the engineering team that identifies new requirements and adds features to the test infrastructure to accommodate new feature development in the product
- Monitor the systems composing the infrastructure for regressions or failures and react promptly to unblock the engineering team; identify and proactively tackle issues that impact developer productivity; identify bottlenecks and plan for growth
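The flaky-test analytics mentioned above reduce to a simple invariant: a test whose recent history contains both passes and failures is a flakiness candidate. A small sketch, with made-up test names and record shape; in the real system these records would be documents pulled from an Elasticsearch index.

```python
# Flag "flaky" tests: tests that both passed and failed across runs.
# Record shape ({"test": ..., "status": ...}) is an invented example.

def find_flaky(runs: list) -> set:
    """A test is flaky if its history holds at least one pass AND one fail."""
    outcomes = {}
    for r in runs:
        outcomes.setdefault(r["test"], set()).add(r["status"])
    return {name for name, seen in outcomes.items() if {"pass", "fail"} <= seen}

runs = [
    {"test": "test_login", "status": "pass"},
    {"test": "test_login", "status": "fail"},
    {"test": "test_checkout", "status": "pass"},
    {"test": "test_checkout", "status": "pass"},
]
print(sorted(find_flaky(runs)))  # ['test_login']
```

In production the same aggregation would be pushed down into an Elasticsearch terms aggregation rather than computed client-side.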
Tech Lead DevOps
Link Group
⚲ Remote
28 560 - 33 600 PLN (B2B)
Wymagania
- DevOps
- Google cloud platform
- Cloud platform
- CD pipeline
- GitHub
- Azure
- Terraform
- Infrastructure as Code
- SSO
About the project: We are looking for an experienced DevOps Tech Lead to guide and coordinate the work of two DevOps-focused teams responsible for building and maintaining modern cloud infrastructure and data platforms. In this role, you will combine technical leadership, architecture oversight, and hands-on DevOps expertise to ensure scalable, secure, and reliable delivery pipelines. You will work closely with engineering, data, and business stakeholders to translate requirements into robust technical solutions, while supporting teams in building and maintaining high-quality cloud environments.

Requirements:
- Proven experience in a DevOps Engineer, DevOps Lead, or DevOps Tech Lead role.
- Strong hands-on experience with Google Cloud Platform.
- Experience designing and maintaining CI/CD pipelines with GitHub, GitHub Actions, and Azure DevOps.
- Advanced knowledge of Terraform and Infrastructure as Code practices.
- Experience working with cloud-native services and container-based environments.
- Understanding of solution architecture principles.
- Experience working with authentication and identity management (SSO, PingID, or similar tools).
- Strong problem-solving skills and the ability to lead technical discussions.
- Experience collaborating with business stakeholders and cross-functional teams.

Daily tasks:
a) Technical Leadership:
- Provide technical leadership for two DevOps teams working on cloud infrastructure, pipelines, and data platform components.
- Support engineers in designing scalable and maintainable cloud architectures.
- Guide teams in implementing best practices in DevOps, CI/CD, and infrastructure automation.
- Ensure consistency of architecture, tooling, and development standards across teams.
b) DevOps & Cloud Engineering:
- Design and oversee CI/CD pipelines using GitHub, GitHub Actions, and Azure DevOps.
- Build and maintain infrastructure on Google Cloud Platform, including services such as Google Cloud Run.
- Implement Infrastructure as Code using Terraform.
- Ensure reliable, automated deployment processes and maintain DevOps pipelines across environments.
- Support integration of authentication and identity services such as PingID and SSO solutions.
c) Architecture & Data Platforms:
- Contribute to solution architecture decisions for cloud-native platforms and data-driven systems.
- Support the development of data pipelines and analytical environments, particularly in domains related to life sciences or bioinformatics.
- Collaborate with architects and engineering teams to ensure alignment between architecture and delivery.
d) Business & Stakeholder Collaboration:
- Work closely with business analysts and stakeholders to translate business needs into technical solutions.
- Ensure clear communication between architecture, DevOps, and product teams.
- Facilitate technical discussions and support decision-making for complex technical challenges.
Software Engineer - Email Delivery
Twilio
Wymagania
- Java
- Spring
- Hibernate
- React
- Docker
Who we are: At Twilio, we’re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work, and strong culture of connection and global inclusion means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands. We use Artificial Intelligence (AI) to help make our hiring process efficient. That said, every hiring decision is made by real Twilions!

See yourself at Twilio: Join the team as our next Software Engineer - Email Delivery.

About the job: This position is needed to build and maintain reliable applications for Twilio’s supply insights and trust. The work involves developing back-end applications and front-ends for internal tools. As a Software Engineer in the team, you will partner with product managers, architects, engineering managers, and other engineers to develop features for Messaging Supply products. You will be developing our messaging supply platform with an emphasis on interfaces for Twilio’s suppliers to interact with Twilio, automation of manual tasks, and new features that support both internal and customer-facing applications.

Responsibilities: In this role, you’ll:
- Design, develop, test, and deploy features alongside a small, distributed, collaborative team to build highly scalable and available services
- Collaborate with cross-functional teams, product managers, designers, and engineers to build compelling user experiences for developers and end users
- Ensure quality by writing unit, integration, and load tests, as well as conducting thorough code reviews
- Work independently to troubleshoot and determine resolutions for issues in your team's domain
- Build new features for both internal and customer-facing applications to ensure seamless integration and a great customer experience

Location: This role will be remote from Estonia.

Travel: We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.

Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values – something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions.
Software Engineer
Confluent
From 5 100 GBP (UoP)
Wymagania
- Go
- Kubernetes
- Terraform
- Datadog
- Prometheus
About the Role: The Confluent Cloud Platform, based on Apache Kafka, is the leading cloud-native Platform as a Service for streaming data infrastructure, but this is just the beginning. We are building a PaaS enabling customers around the globe to deliver streaming applications.

What You Will Do:
- Design, implement, and maintain Golang infrastructure services (typically implemented as Kubernetes operators) to deliver the Confluent Cloud foundations to the wider engineering organization.
- Kubernetes, Terraform, Datadog, Prometheus, and a strong command of Linux, public cloud, and networking should come as standard, though Golang software engineering will be your primary focus.
- As a Confluent Cloud Infrastructure Software Engineer, you will work alongside the rest of the Confluent engineers to build our PaaS product.
- You, and the rest of the team, will be responsible for the provisioning, availability, performance, monitoring, emergency response, and capacity planning of the Confluent Cloud.
- If you love the hum of big data systems covering tens of thousands of instances spread across thousands of Kubernetes clusters in multiple cloud providers, enjoy thinking about how to make them run as smoothly and securely as possible, and want to have a big influence on the architecture and operational design points of this new product, then you will fit right in.

Company Introduction: We’re not just building better tech. We’re rewriting how data moves and what the world can do with it. With Confluent, data doesn’t sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them. It takes a certain kind of person to join this team. Those who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together. One Confluent. One Team. One Data Streaming Platform.
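The Kubernetes operators this role builds all follow the same reconcile pattern: observe actual state, compare it with the declared spec, and emit the smallest corrective actions. Confluent's operators are written in Go against the Kubernetes API; this pure-Python toy (with invented field names) only illustrates the control loop.

```python
# The operator pattern in miniature: drive observed state toward the
# declared spec. Field names ("replicas", "version") are illustrative.

def reconcile(spec: dict, status: dict) -> list:
    """Return the actions needed to move `status` toward `spec`."""
    actions = []
    want, have = spec["replicas"], status.get("replicas", 0)
    if have < want:
        actions.append(f"scale-up:{want - have}")
    elif have > want:
        actions.append(f"scale-down:{have - want}")
    if status.get("version") != spec["version"]:
        actions.append(f"rolling-upgrade:{spec['version']}")
    return actions  # empty list == converged, nothing to do

print(reconcile({"replicas": 3, "version": "7.6"},
                {"replicas": 1, "version": "7.5"}))
# ['scale-up:2', 'rolling-upgrade:7.6']
```

A real operator runs this loop on every watch event until the returned action list is empty, which is what makes the system self-healing.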
Site Reliability Engineer (m/k/n)
Upvanta
⚲ Remote
25 200 - 30 450 PLN (B2B)
Requirements
- DevOps
- SRE
- DNS
- TCP
- HTTP
- BGP
- Proxy
- Datadog
- Grafana
- Terraform
- Ansible
- AWS Lambda
- GitLab CI (nice to have)
- Unix (nice to have)
- Linux (nice to have)
- CDN (nice to have)
About the project: We are looking for a Site Reliability Engineer to join the CDN Operations team, responsible for the reliability, performance, and growth of the global CDN infrastructure supporting OTT (streaming) content distribution.
Requirements:
- Technical degree (Computer Science / Networks / Telecommunications).
- At least 4-5 years of experience in SysOps, DevOps, or SRE roles.
- Very good knowledge of networking topics: DNS, TCP, HTTP, routing (BGP), caching, proxies.
- Passion for automation, observability, and performance optimization.
- Familiarity with monitoring tools (e.g. Datadog, Grafana).
- Experience with DevOps tools (Terraform, Ansible, AWS Lambda, GitLab CI/CD).
- Very good knowledge of Unix/Linux systems.
- Experience with CDNs or OTT streaming / QoS is a plus.
- Self-reliance, good work organization, and an analytical approach to problems.
- Ability to communicate with both technical and non-technical teams.
- Very good command of English (French is an additional asset).
Day-to-day tasks:
- Ensure the availability, reliability, and performance of the CDN platforms (cloud, bare-metal servers, international networks, internet exchange points, caches hosted at ISPs).
- Regularly analyze CDN capacity, performance trends, and traffic forecasts to support growing global demand.
- Support production deployments and rollouts, and analyze OTT content consumption patterns across regions.
- Monitor key performance indicators (latency, throughput, cache efficiency, error rates) and propose optimizations for service quality and infrastructure costs.
- Take part in incident handling, root cause analysis, and efforts to improve system reliability.
- Occasionally support DevOps teams with operational tasks.
- Build and maintain the observability stack for the CDN platforms (logs, metrics, traces, alerts), currently based on Datadog.
- Create and maintain monitoring templates (dashboards, alert rules, SLO/SLA views, log pipelines).
- Design scalable monitoring solutions that handle large volumes of data and traffic.
- Implement automated health checks, anomaly detection, and alerting for a 24/7 environment.
- Improve how data is collected, correlated, and visualized so that technical and business teams can quickly analyze platform behavior and user experience.
- Create and develop scripts and automations (Python, Bash, APIs) to collect operational, cost, and performance metrics.
- Build internal tools used by teams in different countries, including log analysis tools, traffic and audience visualizations, CDN configuration validators, diagnostic and troubleshooting tools, and cache tests.
- Automate operations through Infrastructure as Code (Terraform), CI/CD pipelines, and automated configuration rollouts.
- Work closely with the OTT Engineering, DevOps, Network, Security, and Data teams, as well as business units in different regions.
- Co-create
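The KPI monitoring this posting describes (latency, throughput, cache efficiency, error rates) boils down to simple per-window arithmetic; a small self-contained Python sketch, where the metric names and sample values are invented for illustration:

```python
def cache_hit_ratio(hits: int, misses: int) -> float:
    """Fraction of requests served from cache; 0.0 when there was no traffic."""
    total = hits + misses
    return hits / total if total else 0.0

def p95_latency(samples_ms: list[float]) -> float:
    """95th-percentile latency using the nearest-rank method."""
    if not samples_ms:
        raise ValueError("no samples")
    ordered = sorted(samples_ms)
    # nearest rank: the ceil(0.95 * n)-th ordered sample (1-indexed)
    rank = max(1, -(-95 * len(ordered) // 100))
    return ordered[rank - 1]
```

A CDN edge reporting 920 hits and 80 misses in a window would score `cache_hit_ratio(920, 80)`, i.e. 92% cache efficiency; the same windowed samples feed the latency percentile.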
Senior SOC Engineer (L6) - 100% remote
Apollo.io
25 000 - 38 000 PLN (UoP)
Requirements
- Python
- Panther
- GCP
- Ruby
Role Overview: The Security Operations Engineer is a senior individual contributor responsible for detecting, investigating, and responding to security threats across Apollo's cloud-native and SaaS environments. This role requires strong technical depth, independent judgment, and ownership of complex security investigations from intake through resolution. The role operates in a fully remote environment and emphasizes clear written communication, operational rigor, and effective collaboration.
Key Responsibilities
Incident Detection, Investigation & Response
- Monitor, triage, and investigate security alerts and events across cloud infrastructure, SaaS applications, and corporate systems.
- Conduct end-to-end security investigations, including scoping, containment, eradication, recovery, and documentation.
- Own investigations independently while collaborating effectively during high-severity incidents.
SIEM, Detection & Workflow Engineering
- Configure and maintain SIEM detections in Panther, including use cases, correlation rules, alert logic, and tuning.
- Onboard, validate, and maintain log sources to ensure visibility, accuracy, and reliability.
- Design and improve investigation and response workflows to streamline triage, escalation, and resolution.
- Leverage AI-assisted tools to accelerate alert analysis, enrichment, and investigation efficiency.
Threat Hunting & Proactive Security
- Perform proactive threat-hunting activities to identify malicious or anomalous behavior not surfaced by existing detections.
- Investigate abuse, fraud, account compromise, and automation misuse scenarios in close collaboration with Fraud teams.
- Identify detection gaps and propose, implement, and validate improvements.
Automation, Coding & Tooling
- Build scripts, automations, and tools to reduce manual work and improve response speed and consistency.
- Use Python extensively for analysis, automation, and internal tooling; Ruby experience is a plus.
- Contribute to internal detection frameworks, tooling, and shared libraries.
Documentation & Continuous Improvement
- Produce clear, high-quality documentation for incidents, investigations, and post-incident reviews.
- Contribute to runbooks, playbooks, and operational standards.
- Share knowledge, review peer work, and mentor other engineers.
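Panther detections like those the posting describes are authored as Python functions that return a boolean for each event. A minimal hypothetical sketch, where the event fields, allowlist, and logic are invented for illustration and are not Apollo's actual detections:

```python
# Hypothetical Panther-style detection: flag console logins without MFA
# from outside an allowlisted set of countries. In Panther, `event` is a
# richer event object; a plain dict stands in for it here.
ALLOWED_COUNTRIES = {"PL", "US"}  # assumption for the sketch

def rule(event: dict) -> bool:
    """Return True when the event should raise an alert."""
    if event.get("eventName") != "ConsoleLogin":
        return False
    mfa_used = event.get("additionalEventData", {}).get("MFAUsed") == "Yes"
    country = event.get("sourceCountry", "unknown")
    return not mfa_used and country not in ALLOWED_COUNTRIES

def title(event: dict) -> str:
    """Alert title shown to the responder."""
    return f"Console login without MFA from {event.get('sourceCountry', 'unknown')}"
```

Tuning such a rule (the "alert logic, and tuning" bullet) typically means tightening the predicate or the allowlist until the false-positive rate is acceptable.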
Senior SIEM Engineer (L6) - 100% remote
Apollo.io
25 000 - 38 000 PLN (UoP)
Requirements
- Python
- Ruby
- GCP
- AWS
Role Overview: The Security Operations Engineer is a senior individual contributor responsible for detecting, investigating, and responding to security threats across Apollo's cloud-native and SaaS environments. This role requires strong technical depth, independent judgment, and ownership of complex security investigations from intake through resolution. The role operates in a fully remote environment and emphasizes clear written communication, operational rigor, and effective collaboration.
Key Responsibilities
Incident Detection, Investigation & Response
- Monitor, triage, and investigate security alerts and events across cloud infrastructure, SaaS applications, and corporate systems.
- Conduct end-to-end security investigations, including scoping, containment, eradication, recovery, and documentation.
- Own investigations independently while collaborating effectively during high-severity incidents.
SIEM, Detection & Workflow Engineering
- Configure and maintain SIEM detections in Panther, including use cases, correlation rules, alert logic, and tuning.
- Onboard, validate, and maintain log sources to ensure visibility, accuracy, and reliability.
- Design and improve investigation and response workflows to streamline triage, escalation, and resolution.
- Leverage AI-assisted tools to accelerate alert analysis, enrichment, and investigation efficiency.
Threat Hunting & Proactive Security
- Perform proactive threat-hunting activities to identify malicious or anomalous behavior not surfaced by existing detections.
- Investigate abuse, fraud, account compromise, and automation misuse scenarios in close collaboration with Fraud teams.
- Identify detection gaps and propose, implement, and validate improvements.
Automation, Coding & Tooling
- Build scripts, automations, and tools to reduce manual work and improve response speed and consistency.
- Use Python extensively for analysis, automation, and internal tooling; Ruby experience is a plus.
- Contribute to internal detection frameworks, tooling, and shared libraries.
Documentation & Continuous Improvement
- Produce clear, high-quality documentation for incidents, investigations, and post-incident reviews.
- Contribute to runbooks, playbooks, and operational standards.
- Share knowledge, review peer work, and mentor other engineers.
Senior DevOps / Site Reliability Engineer
N-iX
⚲ Gdynia, Wrocław, Kraków
5 000 - 6 000 PLN (B2B)
Requirements
- Python
- Azure
- Terraform
- Docker
- Atlassian
- Datadog
- Ansible (nice to have)
About the project: #4936 Our client is a leader in sustainable packaging solutions, leveraging cutting-edge cloud technologies to enhance production, operational excellence, and innovation. Join our team and contribute to eco-friendly advances with state-of-the-art technology and DevOps practices. Working hours: 11:00-19:00 CET.
Requirements:
Technical Skills:
- Microsoft Azure (App Services, VMs, Container Instances, AKS, SQL Server, Azure SQL).
- Git, GitHub, GitHub Actions.
- SonarQube Cloud, Terraform, Docker.
- Datadog (extensive), SRE concepts (SLOs, SLIs, golden signals, instrumentation).
- Incident management, dashboard development, business reporting.
- Shell scripting, YAML/JSON configs, Python.
- Ubuntu, RHEL, CentOS, Windows/Windows Server (entry level).
Process & Frameworks:
- Atlassian Suite (Jira/Confluence).
- ITSM / ITIL familiarity.
- AI tools (Claude, GitHub Copilot, etc.).
Nice-to-Have Qualifications:
- Microsoft Azure: App Insights, IoT Hub, Azure DevOps, API Management.
- DevOps: Ansible, Argo CD, CodeRabbit, Artifactory.
- SRE practices: capacity planning, cost optimization.
- Programming languages: JavaScript, PowerShell.
- Other tools: Snowflake, PagerDuty, Salesforce MuleSoft Anypoint.
Education & Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 5+ years of demonstrated experience in DevOps, SRE, or cloud engineering roles.
- Analytical thinker, problem solver, and proactive communicator.
- Strong collaboration skills, especially across cross-functional and remote teams.
- Ability to thrive in a fast-paced, innovative business environment.
Day-to-day tasks:
- Cloud Infrastructure: Architect, implement, and manage Microsoft Azure resources including App Services, Virtual Machines, Container Instances, AKS, SQL Server/Instance, and Azure SQL.
- DevOps Automation: Design and maintain CI/CD workflows using Git, GitHub Actions, SonarQube Cloud, Terraform, and Docker.
- SRE Practices: Develop and monitor SLOs, SLIs, and golden signals; instrument applications and infrastructure; build Datadog dashboards for real-time business and incident reporting.
- Incident Management: Lead incident response, root cause analysis, and post-mortem documentation. Maintain high availability and rapid recovery for business-critical systems.
- Monitoring & Observability: Make extensive use of Datadog for monitoring, logging, and performance analytics.
- Configuration Management: Work with Shell, YAML, JSON, and Python for scripting, automation, and configuration.
- System Administration: Administer Ubuntu, RHEL, CentOS, and (entry-level) Windows Server environments.
- Collaboration: Use the Atlassian Suite (Jira, Confluence) for documentation, ticketing, and project tracking; contribute to ITSM/ITIL frameworks.
- AI & Productivity Tools: Integrate and leverage tools like Claude, GitHub Copilot, and other AI productivity solutions.
- Reporting: Create dashboards and business reports to provide actionable insights and drive continuous improvement.
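SLOs and error budgets, mentioned in the SRE bullets of this posting, reduce to simple arithmetic over request counts; a Python sketch, where the SLO target and counts are made-up numbers rather than the client's actual objectives:

```python
def error_budget(slo_target: float, total_requests: int) -> float:
    """Number of failed requests the SLO allows in the window."""
    return (1.0 - slo_target) * total_requests

def burn_rate(failed: int, total: int, slo_target: float) -> float:
    """How fast the error budget is being consumed:
    1.0 = exactly on budget, >1.0 = burning faster than the SLO allows."""
    if total == 0:
        return 0.0
    observed_error_rate = failed / total
    allowed_error_rate = 1.0 - slo_target
    return observed_error_rate / allowed_error_rate
```

With a 99.9% availability target, 50 failures in 10,000 requests is a burn rate of 5x, which is the kind of signal a Datadog burn-rate monitor would alert on.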
Process Automation Developer (Python / AWS)
Verita HR
⚲ Remote
21 840 - 25 200 PLN (B2B)
Requirements
- Python
- AWS
- CI/CD
- Linux (nice to have)
- RPA (nice to have)
About the project:
- Stable, long-term projects
- Competitive salary with a B2B contract
- Remote work
- Private healthcare and Multisport card
Requirements:
- Very good command of English.
- Very good knowledge of Python (clean, testable code).
- Experience with AWS services: Lambda, EC2, Fargate, S3, RDS, Step Functions, EventBridge, SNS, SQS, VPC.
- Experience with system integrations (APIs, databases, message brokers).
- Ability to analyze business processes and translate them into technical solutions.
- Hands-on experience with containers (Docker or Podman).
- Experience with CI/CD (GitHub Actions or similar).
- Knowledge of IaC: Terraform, AWS CDK, or CloudFormation.
Nice to have:
- Good knowledge of Linux (CLI, administration).
- Experience with RPA (UiPath or similar).
- Familiarity with Snowflake, Exasol, or other data warehouses.
- German language skills.
Day-to-day tasks:
- Design and implement automations in Python.
- Build integrations with systems via REST APIs (including Teams, SharePoint, and internal applications).
- Work with Apache Kafka and databases: PostgreSQL, MariaDB, Snowflake, Exasol, Oracle.
- Process data in Excel, CSV, and JSON formats.
- Build and maintain cloud-native solutions in AWS (Lambda, EC2, Fargate).
- Work with S3, RDS, Step Functions, EventBridge, SNS, SQS.
- Configure networking components: VPC, Route 53, Systems Manager.
- Create and develop deployment pipelines (GitHub Actions → AWS).
- Manage Linux environments (EC2).
- Build and maintain containers (Docker / Podman).
- Maintain code quality: unit tests, linting, code reviews, documentation.
Teamwork & Collaboration:
- Co-create the team's technical standards and roadmap.
- Document solutions and share knowledge.
- Work with business and IT on process analysis and automation design.
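The CSV and JSON processing this posting lists needs nothing beyond the Python standard library; a minimal sketch, where the column names and records are invented for illustration:

```python
import csv
import io
import json

def csv_to_records(csv_text: str) -> list[dict]:
    """Parse CSV text into a list of dicts, one per row, keyed by header."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def records_to_json(records: list[dict]) -> str:
    """Serialize records to pretty-printed JSON for a downstream system."""
    return json.dumps(records, indent=2, ensure_ascii=False)

# Hypothetical input, standing in for a file pulled from S3 or an API.
raw = "invoice_id,amount\nINV-1,100.50\nINV-2,99.00\n"
records = csv_to_records(raw)
```

In a Lambda-based automation the same two functions would sit between the fetch step (S3/API) and the publish step (SQS, database, etc.).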
Junior IT Specialist
Salesandbetter Sp. z o.o.
⚲ Warszawa
Requirements
- Python
- Docker
Our requirements:
- Strong grounding in data engineering and machine learning; preferably 2-3 years of experience working with Python in a cloud computing environment.
- Advanced Python skills: fluent handling of data structures, asynchronous programming, and retrieving data via APIs.
- Experience with vector databases and retrieval: efficient search over mathematically processed text data (vectors).
- Practical experience in natural language processing (NLP) using modern language models (e.g. transformers).
- Automation (DevOps) skills: configuring daily data synchronizations in a cloud environment and containerization (Docker).
- Precision in analyzing and processing complex document structures, avoiding duplicates in the database.
- Experience with cloud hosting: managing and securing virtual (Linux) instances.
- Independently resolving problems caused by connection interruptions or changes in external data sources.
- Knowledge of English.
Nice to have:
- An engineering degree in computer science (bachelor's/master's)
- Experience in a similar role
- Knowledge of German
About the project: The project involves building and automating a scalable pipeline for processing Polish-language text data to support AI-based queries.
Responsibilities:
- Building and automating a scalable data pipeline for processing large volumes of (Polish) text data and making it available for AI-based queries.
- Data pipeline: developing an automated process that fetches, filters, and processes new public datasets daily.
- Backend integration: providing an interface through which the editor environment queries the database and receives results within milliseconds.
- Infrastructure management: monitoring cloud costs and optimizing compute for the daily update cycles.
We offer: stable employment, long-term cooperation, opportunities for training and development, work on an innovative project, work in an international team.
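Vector retrieval, the core of the role above, comes down to nearest-neighbor search by cosine similarity; a pure-Python sketch for illustration (a real deployment would use a vector database, and the sample corpus here is invented):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def top_k(query: list[float], corpus: dict[str, list[float]], k: int = 2) -> list[str]:
    """IDs of the k corpus vectors most similar to the query."""
    ranked = sorted(corpus, key=lambda doc_id: cosine_similarity(query, corpus[doc_id]), reverse=True)
    return ranked[:k]

# Toy 2-dimensional "embeddings"; real text embeddings have hundreds of dimensions.
corpus = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0], "doc_c": [0.7, 0.7]}
```

Deduplication, also mentioned in the posting, can reuse the same function: two documents whose embeddings exceed a similarity threshold are treated as duplicates.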
Mid/Senior DevOps Engineer
Motorola Solutions Systems Polska
⚲ Kraków
12 500 - 16 000 PLN (PERMANENT)
Requirements
- Java
- Jenkins
- Groovy
- Python
- GitHub Actions
- KVM
- Testing
- Linux
- VLAN
- Windows
- Virtualization
- Cloud (nice to have)
- AWS (nice to have)
About the project: The Krakow PCR (Professional and Commercial Radio) department is responsible for delivering software for mission-critical and business-critical communications for public safety, humanitarian solutions, manufacturing, retail, logistics and many other verticals. Our team follows industry-standard and innovative software development practices such as continuous integration, continuous deployment, and component- and system-level testing. We use Agile methodologies to organize our work, ranging from cross-functional small teams to Agile portfolio management. As owners, we are responsible for the end-to-end software development lifecycle, including specification and design, implementation, system-level testing, delivery and maintenance.
Technical stack: Java, Jenkins, basic Groovy, basic Python, basic GitHub Actions, basic Linux/KVM virtualization.
Requirements:
- Java
- Jenkins
- Basic knowledge of Groovy
- Basic knowledge of Python
- Basics of GitHub Actions
- Basic knowledge of IP networks (VLANs, routing protocols)
- Knowledge of operating systems such as Windows and Linux
- Willingness to develop towards test automation
- Comfortable working hands-on with hardware, device configuration, and cabling
- Experience with, or the ability to learn, Agile/Scrum
- Basic knowledge of virtualization, e.g. KVM
- Basic knowledge of testing methodologies
- Good analytical skills
- Troubleshooting and problem-solving skills
- Focus on quality and attention to detail
- Degree in Computer Science, Telecommunications, Electronics, Automation and Robotics, Information and Communications Technologies or a related field
- English B2
- Hybrid work model: 2 days from the office; during the trial period, the possibility to be in the office more often.
Additional Skills:
- Basic cloud (AWS) knowledge
- Any programming certificate
Day-to-day tasks: The DevOps Engineer (with programming) position involves an active role in CI/CD pipeline development and test automation. This includes responsibilities such as: creating and maintaining the software continuous integration and validation environments and software build environments; coding, maintaining and executing system and subsystem test scenarios; creating, running and enhancing the test automation framework; designing automated test scripts; and automated test execution. The person hired for this position will work with an experienced team fully located at Motorola Solutions in Kraków, Poland, with full local access to the state-of-the-art labs in the Kraków location and all the equipment and systems needed to perform the tasks.
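The test-automation responsibilities in this posting amount to running scripted checks and reporting pass/fail; a toy Python sketch of such a runner, where the checks themselves are placeholders rather than real system or subsystem scenarios:

```python
def run_suite(checks: dict) -> dict:
    """Run named zero-argument checks and collect a PASS/FAIL verdict per check.

    A check passes when it returns without raising AssertionError."""
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = "PASS"
        except AssertionError:
            results[name] = "FAIL"
    return results

# Placeholder checks standing in for real test scenarios.
def check_config_loaded():
    assert {"vlan": 10}["vlan"] == 10  # trivially passes

def check_link_up():
    assert False  # simulated failing scenario

results = run_suite({"config": check_config_loaded, "link": check_link_up})
```

A real framework would add logging, retries, and hardware setup/teardown around the same loop.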
L3 SOC Analyst
Saviynt
Requirements
- Python
- AWS
- Azure
- Kubernetes
- Splunk
Role Overview: We are establishing a modern Security Operations Centre designed to deliver proactive, intelligence-driven security outcomes. Moving beyond traditional reactive monitoring, our SOC emphasises AI, automation, detection engineering, and deep cloud security visibility to identify and neutralise sophisticated threats at scale. The L3 SOC Analyst will act as the senior technical escalation point within the SOC, leading complex investigations, driving automation initiatives, and mentoring junior analysts. This role requires strong hands-on expertise across cloud security, threat hunting, incident response, and orchestration technologies.
WHAT YOU WILL DO:
Incident Response & Technical Escalation
- Act as the final escalation point for complex incidents originating from L1/L2 analysis.
- Lead investigations into high-severity security events, including those impacting AWS, Azure, Kubernetes clusters and hybrid environments.
- Perform advanced forensic analysis across endpoints, cloud workloads, and network telemetry to determine root cause, impact, and remediation actions.
- Correlate telemetry from SIEM, EDR, CSPM, and cloud-native sources to identify sophisticated attack chains.
Security Automation & SOAR Engineering
- Design, develop, and maintain automated response playbooks within the SOAR platform to improve response efficiency.
- Build and maintain automation scripts (Python, Go, etc.) for alert enrichment, evidence collection, and containment.
- Integrate security platforms via APIs to enable streamlined, automated detection and response workflows.
- Identify opportunities to reduce Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR) through automation and process optimisation.
Threat Hunting & Detection Engineering
- Conduct proactive threat hunting across enterprise and cloud environments using intelligence-driven and hypothesis-based methodologies.
- Serve as an SME for cloud security monitoring leveraging tools such as AWS GuardDuty, CloudTrail, CrowdStrike, and Proofpoint.
- Develop and tune SIEM detections, correlation rules, and EDR queries aligned to MITRE ATT&CK tactics and emerging threat intelligence.
Mentorship & Continuous Improvement
- Provide technical mentoring and guidance to L1/L2 analysts to strengthen SOC capability.
- Maintain and enhance SOC documentation including SOPs, runbooks, and response playbooks.
- Analyse incident trends and operational metrics to recommend improvements in detection coverage, automation effectiveness, and security posture.
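MTTD and MTTR, the two metrics this posting targets, are averages over incident timestamps; a Python sketch using only the standard library, with fabricated sample incidents:

```python
from datetime import datetime, timedelta

def mean_delta(pairs: list) -> timedelta:
    """Average of (end - start) over a list of (start, end) timestamp pairs."""
    if not pairs:
        raise ValueError("no incidents")
    total = sum(((end - start) for start, end in pairs), timedelta())
    return total / len(pairs)

# Fabricated incidents: (occurred, detected, resolved).
incidents = [
    (datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 10, 10), datetime(2024, 1, 1, 11, 0)),
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 2, 9, 20), datetime(2024, 1, 2, 9, 50)),
]

mttd = mean_delta([(occurred, detected) for occurred, detected, _ in incidents])
mttr = mean_delta([(detected, resolved) for _, detected, resolved in incidents])
```

Trending these two numbers per week or per month is what makes "reduce MTTD/MTTR through automation" measurable.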
Kubernetes Administrator
Link Group
⚲ Warszawa
20 160 - 28 560 PLN (B2B)
Requirements
- Kubernetes
- OpenShift
- Linux
- Networking
- Storage
- SAN
- Ansible
About the project: Kubernetes Administrator
Requirements:
- Very good knowledge of Kubernetes and OpenShift
- Knowledge of Linux and networking
- Storage skills (NAS / SAN / FC)
- Ansible and scripting skills for developing new automation
Day-to-day tasks:
- Maintain and enhance solutions for monitoring, automation, and reporting.
- Conduct technical reviews and stay current with new features.
- Suggest and implement innovative infrastructure solutions.
- Engage with vendors (opening cases, conducting technical reviews, participating in forums, etc.).
- Collaborate closely with customers to assist them in cloud application transformation.
- Develop new automation solutions.
DevOps Cloud Engineer
Link Group
⚲ Remote
20 160 - 25 200 PLN (B2B)
Requirements
- DevOps
- Cloud
- Google Cloud Platform
- Microsoft Azure
- CI/CD
- GitHub
- Infrastructure as Code
- Terraform
- Python
About the project: We are looking for a skilled DevOps Engineer to join our technology team and help design, implement, and maintain scalable cloud infrastructure and modern CI/CD pipelines. In this role, you will work closely with development and platform teams to automate processes, improve deployment efficiency, and ensure the reliability and security of cloud-based systems. The ideal candidate has strong experience with cloud platforms, infrastructure as code, and modern DevOps practices, with particular expertise in Google Cloud Platform and CI/CD tooling.
Requirements:
- Proven experience as a DevOps Engineer or in a similar cloud/infrastructure role.
- Strong hands-on experience with Google Cloud Platform at a principal or expert level.
- Solid experience with Microsoft Azure.
- Hands-on experience with CI/CD tools such as Azure DevOps, GitHub, and GitHub Actions.
- Strong knowledge of Infrastructure as Code tools, particularly Terraform.
- Good scripting and automation skills using Python.
- Experience with cloud architecture, automation, and deployment best practices.
- Strong troubleshooting and problem-solving skills.
- Ability to work collaboratively in cross-functional teams.
- Good communication skills and proficiency in English.
Day-to-day tasks:
- Design, implement, and maintain CI/CD pipelines using Azure DevOps, GitHub, and GitHub Actions.
- Build and manage scalable cloud infrastructure on Google Cloud Platform and Microsoft Azure.
- Develop and maintain infrastructure using Infrastructure as Code practices with Terraform.
- Automate operational and deployment processes using Python.
- Monitor and optimize cloud environments for performance, reliability, and cost efficiency.
- Collaborate with development teams to improve application deployment processes and platform reliability.
- Implement best practices in security, access control, and cloud governance.
- Troubleshoot infrastructure and deployment issues across environments.
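Python automation in a role like this often starts with small guardrail scripts run in CI; a sketch that validates resource names against a hypothetical naming convention (the `<env>-<team>-<service>` convention itself is invented for illustration, not taken from the posting):

```python
import re

# Hypothetical convention: <env>-<team>-<service>, lowercase, hyphen-separated.
NAME_PATTERN = re.compile(r"^(dev|test|prod)-[a-z0-9]+-[a-z0-9]+$")

def invalid_names(resource_names: list) -> list:
    """Return the names that break the convention, for a CI gate to report."""
    return [name for name in resource_names if not NAME_PATTERN.fullmatch(name)]
```

Wired into a pipeline step, a non-empty return value would fail the build before Terraform ever applies the offending resources.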
Cloud/Containers Security - Enterprise Engineer
Antal
⚲ Kraków, Warszawa
36 960 - 40 320 PLN (B2B)
Requirements
- AWS
- Amazon EKS
- Kubernetes
- GCP
- GKE
About the project: We are looking for an experienced Enterprise Engineer - Cloud/Containers Security who will be responsible for designing and developing solutions for securing cloud-native environments, with particular focus on endpoint protection within Cloud Native Application Protection (CNAPP) services. The person in this role will work with software engineering, cloud platform, and operations teams, ensuring the high availability, performance, and effectiveness of the deployed security services.
Terms of cooperation:
- Hybrid work model: 6 days per month in the office
- Location: Kraków or Warszawa
- Rate: 220-240 PLN/h
- Cooperation type: contracting, B2B
Benefits:
- Private medical care
- Multisport card or access to the MyBenefit cafeteria
- Life insurance
Requirements:
- Hands-on experience with endpoint security tools (e.g. CrowdStrike, Wiz Sensor, Microsoft Defender).
- Experience integrating cloud and container security solutions with SIEM systems or analytics tools.
- Practical knowledge of cloud and container technologies (e.g. AWS, EKS, Kubernetes, GCP, GKE).
- Ability to work effectively with diverse stakeholders and manage relationships.
- Attention to detail and the ability to work with a risk-based approach.
Day-to-day tasks:
- Managing endpoint security in cloud and container environments (e.g. CrowdStrike, Wiz, Falco), including configuring runtime agents in line with best practices and organizational standards.
- Reviewing and tuning detections, creating custom detection rules, onboarding, validating deployments, and decommissioning solutions.
- Defining reporting requirements to ensure security mechanisms operate effectively across the entire environment.
- Monitoring system performance, identifying data gaps, analyzing detection accuracy, and eliminating false positives.
- Working with security operations teams and service owners on incident triage and analysis.
- Supporting the preparation of configuration evidence and operational metrics for audits and external stakeholders (e.g. regulators).
- Applying change management practices to security policy and configuration updates.
Cloud Engineer
Omada
⚲ Warszawa
18 000 - 25 000 PLN (PERMANENT)
Requirements
- Azure
- AKS
- Azure SQL
- Azure DevOps
- PowerShell
- ARM Templates
- Networks
- Bicep (nice to have)
- Terraform (nice to have)
- Helm (nice to have)
- Docker (nice to have)
About the project: We are seeking a Cloud Engineer with strong experience in Microsoft Azure, Azure DevOps, and PowerShell automation. You will be responsible for building, deploying, and maintaining cloud infrastructure based primarily on ARM Templates and Azure CLI/PowerShell.
What we offer:
- Hybrid work policy (at least 2-3 days per month from the office)
- 5 additional vacation days
- Benefit platform (500 PLN per month)
- Support for your ongoing growth & development
- Team-building activities
- Great work environment
Requirements:
- 3-5 years of experience as a Cloud Engineer / DevOps / Infrastructure Engineer.
- Strong hands-on experience with Microsoft Azure (VMs, VNets, App Gateway, NSG, Azure SQL, AKS).
- Proficiency with Azure DevOps (Repos, Pipelines, Artifacts).
- Strong PowerShell scripting abilities.
- Experience writing and maintaining ARM Templates (JSON-based IaC).
- Skilled in Azure CLI and PowerShell for infrastructure deployment.
- Understanding of networking: load balancers, routing, DNS, SSL, firewalls.
Nice-to-Have:
- Experience with migration from ARM to Bicep or Terraform (future direction).
- Knowledge of Helm, Docker, Terraform.
- Azure certifications (AZ-104 / AZ-303/304 / AZ-400).
Soft Skills:
- Strong analytical and troubleshooting skills.
- Good communication and documentation habits.
- Self-driven, with the ability to operate independently in a remote setting.
- Ability to collaborate with cross-functional teams (Dev, Security, Ops).
Day-to-day tasks:
- Design, deploy, and maintain Azure resources, including: Virtual Machines, Azure SQL, Azure Kubernetes Service (AKS), VNets, Subnets, Network Security Groups (NSG), Application Gateways + Web Application Firewall (WAF).
- Work with Azure identity-integrated infrastructure components, including: Managed Identities (system- and user-assigned) for workloads, pipelines, automation, and service integrations; Microsoft Entra ID integration for authentication/authorization of Azure services; Enterprise Applications & Service Principals for app registration, access control, and workload identity; configuration of API Permissions (delegated & application) for automation, Graph API access, and service integrations.
- Maintain and enhance existing infrastructure using ARM Templates for IaC and Azure CLI / PowerShell.
- Ensure secure, scalable, and standardized deployments across environments.
CI/CD & Automation:
- Build and maintain Azure DevOps CI/CD pipelines (YAML preferred).
- Automate infrastructure deployment using ARM Templates, Azure CLI, and PowerShell.
- Develop PowerShell automation for provisioning, configuration, monitoring, and maintenance tasks.
- Integrate ARM-based IaC with DevOps workflows.
- Monitor, troubleshoot, and optimize Azure workloads for performance, cost, and reliability.
- Perform incident response, root cause analysis, and implement long-term fixes.
- Maintain configuration baselines and documentation for deployed infrastructure.
- Implement Azure security best practices across networking, identity, and data.
- Configure and maintain firewall rules, NSGs, App Gateway/WAF, and RBAC.
- Support compliance, auditing, and governance frameworks.
Senior Site Reliability Engineer
PandaDoc
⚲ Kyiv, Warsaw, Lisbon
Requirements
- Python
- Kubernetes
- AWS
- Grafana
- PostgreSQL
Site Reliability Engineers (SREs) are essential to PandaDoc's success, ensuring customers receive a reliable service with minimal downtime. The SRE team achieves this by:
- Owning the incident management processes and tools.
- Managing the observability stack and alerting systems to enable timely investigation and mitigation.
- Actively contributing to service codebases to proactively prevent incidents and resolve performance bottlenecks.
In essence, SREs are the cornerstone of production service resiliency, driving efforts in observability, incident management, capacity planning, and maintaining reliable operations.
In this role, you will:
- Own and influence the incident management process end-to-end
- Maintain and evolve the on-prem observability stack
- Keep production applications running smoothly by participating in the on-call rotation
- Develop automations and tools to support platform reliability
- Contribute to production services with performance and resiliency in mind
- Collaborate with product engineers to foster SRE principles within the R&D organization
- Be a mentor for the SRE team or product engineers
Company Overview: PandaDoc empowers more than 67,000 growing organizations to thrive by taking the work out of document workflow. PandaDoc provides an all-in-one document workflow automation platform that helps fast-scaling teams accelerate the ability to create, manage, and sign digital documents including proposals, quotes, contracts, and more. For more information, please visit https://www.pandadoc.com/.
Company Culture: We're known for our work-life balance, kind co-workers, and creative virtual team-bonding events. And although our Pandas are located across the globe, we stay connected with the help of technology and ensure that everyone on our team feels, well, like a team. Pandas work best when they're happy.
We retain our talent by upholding our values of integrity & transparency, and selling a product that changes the lives of our customers. Check out our LinkedIn to learn more.
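The reliability automations this role describes often start from small building blocks. A minimal sketch of one such tool, a retry helper with exponential backoff (all names are illustrative, not PandaDoc code):

```python
import time

def retry(fn, attempts=4, base_delay=0.1, exceptions=(Exception,)):
    """Call fn, retrying with exponential backoff on failure.

    Waits base_delay * 2**i seconds after the i-th failed attempt;
    re-raises the last exception once attempts are exhausted.
    """
    for i in range(attempts):
        try:
            return fn()
        except exceptions:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

# Example: a flaky operation that succeeds on the third call.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky, base_delay=0.01))  # → ok
```

In practice such helpers also add jitter and a retry budget so that mass retries do not amplify an outage; the sketch keeps only the core idea.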
Senior PostgreSQL Architect / Database Reliability Engineer (DBRE)
SquareOne
⚲ Poznań
26 880 - 31 080 PLN (B2B)
Requirements
- PostgreSQL
- MVCC
- WAL
- Autovacuum under load
- Query Planner
- Azure Database
- Google Cloud SQL
- Linux-based PostgreSQL
About the project:
Mission of the Role: Stability, Scale, and Efficiency
We are looking for an engineer capable of designing and maintaining large-scale data systems (2TB+) while ensuring stability and cost efficiency. If your ideal system runs reliably because it was well-architected from the start, this role is for you. Our environment is hybrid (on-premise, Azure, GCP). This is not an incident-response administrator role: we are seeking an architect who will take ownership of the full data ecosystem, implement a Database-as-Code approach, and demonstrate that query optimization is the most effective way to reduce cloud costs (FinOps).
Seniority & Collaboration:
- Senior or higher level.
- Join the infrastructure team managing public cloud, on-premises systems, network, OS, and databases.
Contract Duration: Engagement planned at least until the end of 2026, with possible extension.
Start Date: Planned start: April 1, 2026.
Requirements:
- Deep knowledge of PostgreSQL internals: MVCC, WAL, Autovacuum under load, Query Planner.
- Experience with large-scale databases (2TB+): partitioning, bloat management, zero-downtime operations.
- Multi-cloud and hybrid environment experience: Azure Database for PostgreSQL, Google Cloud SQL, Linux-based PostgreSQL instances.
- Software engineering approaches: infrastructure as code, automation via custom tools and scripts.
- Pragmatic technology mindset: selecting stable, proven solutions and making decisions based on technical and economic justification.
Work Culture & Language Skills:
- Remote-first work with weekly presence in the Poznań office (Thursdays).
- Polish for daily team communication, English for international projects.
- Emphasis on a no-blame culture: focus on process improvement and automation rather than assigning blame.
Daily tasks:
Observability & Query Insights
- Build a central “Single Point of Truth” for database monitoring across the organization (Prometheus / Grafana / pganalyze).
- Transition from infrastructure-level monitoring to in-depth query analysis.
- Identify performance issues before they impact end users.
Performance Engineering & FinOps
- Audit key database instances for resource usage.
- Optimize costs through configuration tuning, index design, query optimization, and SQL refactoring.
- Support safe downsizing of cloud infrastructure (SKU optimization).
High Availability & Disaster Recovery
- Design high-availability architectures for distributed systems (microservices).
- Standardize failover mechanisms.
- Automate backup and recovery procedures (PITR, pgBackRest).
- Build an environment with predictable, repeatable operational behavior.
Database-as-Code & Standards
- Automate the database lifecycle using Terraform and Ansible.
- Define architectural standards and patterns for development teams: schema design, security policies, backup and maintenance standards.
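The shift from infrastructure-level monitoring to in-depth query analysis described above usually starts from pg_stat_statements data. A minimal sketch of ranking queries by total execution time, assuming the rows have already been fetched from the database (the field names follow pg_stat_statements, but the data here is purely illustrative):

```python
def top_queries(rows, limit=3):
    """Rank query stats by total execution time, descending.

    Each row mimics a pg_stat_statements entry with 'query',
    'calls', and 'total_exec_time' (milliseconds).
    """
    ranked = sorted(rows, key=lambda r: r["total_exec_time"], reverse=True)
    return [
        {
            "query": r["query"],
            "calls": r["calls"],
            "mean_ms": r["total_exec_time"] / r["calls"],
        }
        for r in ranked[:limit]
    ]

# Illustrative numbers, not real production stats.
stats = [
    {"query": "SELECT * FROM orders ...", "calls": 10_000, "total_exec_time": 50_000.0},
    {"query": "UPDATE stock SET ...", "calls": 200, "total_exec_time": 90_000.0},
    {"query": "SELECT 1", "calls": 1_000_000, "total_exec_time": 3_000.0},
]
for q in top_queries(stats, limit=2):
    print(f'{q["mean_ms"]:8.1f} ms/call  {q["calls"]:>9} calls  {q["query"]}')
```

Ranking by total time rather than mean time is the usual FinOps angle: a cheap query executed a million times can cost more than one slow report.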
Senior Platform Engineer (Infrastructure)
Fresha
⚲ London
Requirements
- Docker
- Kafka
- Elixir
- Ruby
About Fresha
Fresha is the leading marketplace platform for beauty & wellness, trusted by millions of consumers and businesses worldwide. Fresha is used by 130,000+ businesses and 450,000+ stylists and professionals worldwide, processing over 1 billion appointments to date. The company is headquartered in London, United Kingdom, with 15 global offices located across North America, EMEA and APAC.
Fresha allows consumers to discover, book and pay for beauty and wellness appointments with local businesses via its marketplace, while beauty and wellness businesses and professionals use an all-in-one platform to manage their entire operations with intuitive business software and financial technology solutions. Fresha’s ecosystem gives merchants everything they need to run their business seamlessly by facilitating appointment bookings, point-of-sale, customer records management, marketing automation, loyalty, beauty products inventory and team management. The consumer marketplace unlocks revenue potential for partner businesses by leveraging the power of online bookings and automated marketing through mobile apps and advanced integrations with major tech brands including Instagram, Facebook and Google.
Role overview
Fresha is entering its next phase of global scale, and the Platform team plays a vital role in making that growth fast, safe, and sustainable. As a Senior Platform Engineer, you will take end-to-end ownership of how our platform and services are built, deployed, and operated. Your focus is clear: remove friction for engineers, reduce operational risk, and ensure the platform scales alongside the business. Your mission is to ensure engineers always have the right tools, patterns, and infrastructure so they can deliver faster, safer, and with confidence. We care deeply about customer impact, even when the work is internal – platform improvements ultimately exist to help Fresha deliver better experiences to customers.
To foster a collaborative environment that thrives on face-to-face interactions and teamwork, all Fresha employees work from our dog-friendly office 4 days per week, with the flexibility to work remotely on Wednesday each week. London office address: The Bower, The Tower, 207 Old St, London EC1V 9NR
Senior DevOps Engineer – Platform Engineering
Link Group
⚲ Warszawa
35 000 - 46 000 PLN (PERMANENT)
Requirements
- DevOps
- Python
- Go
- Kubernetes
- CI/CD
- IaC
- Terraform
- AWS
About the project: We are looking for an experienced DevOps Engineer to help build and scale the core platforms powering our trading and data systems. In this role, you will design reliable, automated, and cloud-ready infrastructure while influencing engineering standards across the organization.
Why join?
- High-impact role shaping core platform architecture
- Strong engineering culture with real ownership
- Opportunity to influence how teams build and run production systems
Requirements:
- 6+ years in DevOps / Platform Engineering
- Strong coding skills in Python, Go, or another language
- Hands-on experience with Kubernetes, CI/CD, and IaC (Terraform)
- Experience working with AWS and modern observability tooling
Daily tasks:
- Build and evolve shared platform services with a focus on scalability and automation
- Design and operate CI/CD pipelines and infrastructure as code
- Champion DevOps & SRE best practices, including observability and reliability engineering
- Support Kubernetes-based environments across cloud (AWS) and on-prem systems
- Partner with engineering teams to improve service stability and delivery speed
Senior DevOps Engineer
x1F
⚲ Wrocław
24 000 - 26 000 PLN (B2B)
Requirements
- Docker
- Kubernetes
- Rancher
- Helm
- OpenShift
- Cloud
- Ansible
- AWX
- Jenkins
- DevOps
- Communication skills
- Progress Chef
- GitLab
- Puppet
- GitHub
- Kafka (nice to have)
About the project: Your focus at x1F
You shape modern DevOps environments that allow software to move smoothly from development into production. You provide technical leadership, guide engineers and act as a trusted point of orientation in DevOps and platform topics. Together with the team, you build stable and scalable infrastructures, automate delivery processes and ensure a high level of quality, security and efficiency. Your work combines development, operations and platform engineering to create reliable systems built for growth.
What we offer
- Community & team culture: Experience events with us that connect and are fun – from relaxed get-togethers to structured team-building sessions. We believe that strong teams grow through shared moments.
- Flexible working & balance: Work where and how it suits you: at home, in the office, or on the road at a workation. Our flexible working hours provide you with maximum flexibility while ensuring there's always some overlap with the project's core working hours.
- Health & well-being: At x1F, your well-being truly matters. Across our Nearshore locations, we offer a variety of benefits designed to support you – both personally and professionally. Curious about what’s available in your country? Reach out to our recruiter anytime – we’re happy to guide you!
- Professional development: We offer German lessons (cost participation) and targeted development programs to support your career growth.
Requirements: Your skills and experience
- Qualification: Completed academic studies in Computer Science, Business Informatics, Natural Sciences or comparable IT-related training.
- Experience: Sound knowledge of container technologies such as Docker, Podman, Kubernetes, Rancher and Helm. Experience with container platforms like OpenShift or Cloud Pak for Data is beneficial.
- Automation skills: Practical experience with automation tools such as Ansible, AWX or Ansible Tower, Chef or Puppet. Experience operating applications such as Kafka is an advantage.
- GitOps knowledge: Good understanding of GitOps methods and tools such as GitLab, GitHub, GitBucket or Jenkins.
- Working style: High sense of responsibility and a structured way of working. Ability to work independently and reliably in complex environments.
- Leadership: Proven leadership skills and experience in leading technical teams and guiding engineers in DevOps or platform environments.
- Languages and mobility: Very good German and English skills – both at C1 level – and willingness to travel within Germany for customer projects. Strong communication skills to facilitate collaboration and understanding across teams and stakeholders.
Your personality
- Analytical: You approach technical challenges with clarity and turn complexity into manageable solutions.
- Communicative: Confident coordination and clear communication in client and project environments.
- Reliable: Your structured way of working creates stability, trust and consistent delivery quality.
Daily tasks:
- Designing: Support and advise customers in the design and setup of software platforms. With your experience in system architecture and DevOps practices, you help create stable and future-proof environments.
- Building: Assist in developing private or public cloud infrastructures. Through your technical insight, you enable scalable and secure cloud setups that fit customer requirements.
- Automating: Manage large system landscapes with automation tools based on GitOps methods and technologies. With your know-how, you establish repeatable workflows that simplify operations and improve quality.
- Operating: Maintain and enhance platform components and deployment pipelines. Using your ability to understand complex systems quickly, you ensure smooth operations and high availability.
Senior Adaptive Autosar Engineer
Accenture Polska
⚲ Warsaw, Wroclaw, Szczecin, Katowice, Lodz
Requirements
- C++
- QNX
- Linux
Accenture is a leading global professional services company that helps the world’s leading businesses, governments and other organizations build their digital core, optimize their operations, accelerate revenue growth and enhance citizen services. We offer solutions and assets across Strategy & Consulting, Technology, Operations, Industry X and Accenture Song.
THE WORK:
- Integrate, configure and maintain a third-party AUTOSAR Adaptive stack and components
- Define and maintain the Adaptive AUTOSAR Machine/Execution manifest for QNX and Linux targets
- Integrate and configure AUTOSAR Adaptive components such as the Operating System Abstraction Layer, State Management, Platform Health Management, Persistency, Crypto, and Log and Trace
- Interface with customer and internal teams
Flexible: The work location for this role may include a mix of working remotely (most of the time), onsite at a client or in an Accenture office – depending on specific project circumstances. With all our roles, there is some in‑person time for collaboration, learning and building relationships with clients, peers, leaders, and communities. As an employer, we will be as flexible as possible to support your specific work/life needs.
Platform Observability Engineer (DevOps / SRE)
Amelco UK
⚲ Warszawa
800 - 1 000 PLN/day net (B2B)
Requirements
- AWS
- Prometheus
- Grafana
- CloudWatch
- Cloudflare
Role Purpose
The Platform Observability Analyst ensures the performance and reliability of Amelco’s platforms through strong monitoring, alerting, and operational insight. The role focuses on observability, incident response, and proactive system stability, providing early detection of issues and supporting rapid resolution. You will work across Prometheus, Grafana, Pingdom, Cloudflare, and AWS CloudWatch to maintain system health and service reliability.
Key Responsibilities
Monitoring and Observability
- Own and maintain dashboards for system health, performance, and uptime.
- Manage Prometheus, Grafana, Pingdom, Cloudflare, and AWS CloudWatch monitoring.
- Manage alerts, adjust thresholds, and configure notifications in line with operational SLAs.
- Monitor system metrics and logs proactively.
Incident and Event Response
- Respond to system alerts and operational issues.
- Take immediate action on critical incidents, mitigate medium-impact issues, and escalate major events to Incident Management as needed.
- Document all alerts, actions, and resolutions.
Proactive System Insights
- Identify trends and early warning signs in system performance.
- Recommend improvements for monitoring, alerting, and operational efficiency.
- Support post-incident reviews and maintain operational documentation and runbooks.
Collaboration and Communication
- Work closely with L2/L3 Support, Incident Management, and DevOps teams.
- Provide clear technical insights to stakeholders.
- Deliver structured shift handovers and summaries for continuity.
The role involves an evening and overnight shift pattern.
Required Skills and Experience
- Experience with Prometheus, Grafana, and alerting pipelines.
- Operational knowledge of AWS services (EC2, ECS/EKS, RDS, S3, CloudWatch).
- Experience with Pingdom, Cloudflare, or similar uptime/performance monitoring tools.
- Understanding of distributed systems, microservices, and cloud-native architecture.
- Experience with log aggregation or observability tools (ELK, Loki, etc.).
- Strong analytical mindset and problem-solving skills.
- Clear documentation and communication skills.
Desirable Skills
- Experience in gaming, fintech, or high-availability environments.
- SQL skills for metrics and log analysis.
- Knowledge of SRE principles and observability best practices.
- Automation experience for monitoring or alerting workflows.
What We Offer
- Competitive contractor rates.
- B2B open-ended contract.
- Hybrid model (Mondays and Fridays from home).
- Full-time job and long-term working possibilities.
- Exposure to modern cloud and observability tooling.
- Opportunity to shape platform monitoring and reliability practices.
- Strong collaboration with platform, DevOps, and operations teams.
- Clear progression path toward SRE or Platform Engineering roles.
- Knowledge-sharing opportunities.
- Dynamic culture surrounded by industry experts.
- Enthusiastic and energetic working environment.
- Flat structure.
- No dress code.
Sounds good? Please submit your CV in English by using the "Apply" button.
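Alert thresholds tied to operational SLAs, as the responsibilities above describe, are often expressed as error-budget burn rates. A minimal sketch of that calculation (the 99.9% SLO and the sample numbers are illustrative, not Amelco's targets):

```python
def burn_rate(errors, requests, slo=0.999):
    """How fast the error budget is being consumed.

    1.0 means errors arrive exactly at the budgeted rate;
    >1.0 means the budget will be exhausted before the SLO window ends.
    """
    if requests == 0:
        return 0.0
    budget = 1.0 - slo          # allowed error fraction, e.g. 0.001
    observed = errors / requests
    return observed / budget

# 50 errors in 10,000 requests against a 99.9% SLO:
# observed error rate 0.005 vs. budget 0.001, i.e. burning 5x too fast.
print(round(burn_rate(50, 10_000), 6))  # → 5.0
```

A common pattern is to alert on the same expression over two windows (e.g. a fast 1-hour and a slow 6-hour window) so that short spikes page only when they persist.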
Mid DevOps Engineer
OFERTEO SPÓŁKA AKCYJNA
⚲ Wrocław
14 000 - 17 000 PLN gross
Requirements
- Docker
- Kubernetes
- GitLab CI
- GitHub Actions
- Jenkins
- AWS
- Google Cloud Platform
- Azure
- Terraform
- Ansible
- CloudFormation
- Bash
- Python
- SQL
- Helm (nice to have)
- Grafana (nice to have)
- Prometheus (nice to have)
- ELK Stack (nice to have)
- PostgreSQL (nice to have)
- MySQL (nice to have)
- MSSQL (nice to have)
- Understanding of SQL script syntax.
Responsibilities:
We are looking for someone who already has solid DevOps foundations and wants to reach the next level, co-creating the infrastructure of a large e-commerce platform. At Oferteo you won't be limited to simple maintenance – you will genuinely support the development of scalable systems, working side by side with experienced specialists. If you are looking for a place where your work translates into the stability and performance of a large service, and the complexity of the projects leaves plenty of room to grow – let's talk.
We offer:
- Real influence on the architecture: you don't just grind through ready-made tickets. You are a partner in conversations about which tools we use and how we build our infrastructure.
- Salary and 50% author's tax-deductible costs: a stable base plus quarterly bonuses (on an employment contract with 50% author's costs).
- Training budget: we invest in your growth – you have dedicated funds for training, certifications and conferences.
- Support for mind and body: private medical care, a sports card and ongoing access to online psychological support.
- Atmosphere and comfort: subsidized meals, fruit, coffee and a chill-out zone in our Wrocław office. We also run team events that genuinely make you want to keep working together.
Technical Lead of IT Operations (Linux / Cloud / DevOps)
QLOS sp. z o.o.
⚲ Łódź, Górna
Requirements
- Bash
- Git
- Docker
- Terraform
- MySQL
- PostgreSQL
- MariaDB
- Zabbix
- AWS
- CentOS
- Ubuntu
- Debian
- Red Hat
- Python
- PHP
- Kubernetes
- Jira
- SaltStack
Our requirements:
- at least 3 years of experience administering and maintaining server environments based on Linux, public cloud, or hybrid infrastructure,
- experience working at the intersection of IT system maintenance, incidents, changes, monitoring, and customer contact,
- ability to assess the quality of a technical solution and of the team's work,
- ability to set priorities, make operational decisions, and take responsibility for the direction of work,
- very good knowledge of Linux systems and practical knowledge of at least part of the stack: AWS, Docker, Kubernetes, Terraform, Zabbix, Bash, Git, MySQL / PostgreSQL,
- ability to analyze incidents and distinguish symptoms from root causes,
- ability to organize the team's work, enforce standards, and give clear feedback,
- strong communication skills, both written and spoken,
- English sufficient to comfortably use technical documentation and communicate.
Nice to have:
- experience as a technical lead, team lead, or senior engineer supporting an operations team,
- knowledge of ITIL / ITSM, incident management, change management, or service delivery,
- experience in environments related to running web applications,
- experience with backup, disaster recovery, high availability, and cloud cost optimization,
- use of task and knowledge management tools (Jira / Confluence / ClickUp).
Responsibilities:
- coordinating the work of the IT operations department and properly prioritizing tickets, alerts, and planned work,
- overseeing the quality of the L1 and L2 teams' work, including verifying the effectiveness of actions taken and the quality of customer communication,
- supporting the team in analyzing harder technical problems and in escalations to higher support lines,
- developing and streamlining the triage, escalation, incident handling, and planned-work processes,
- creating, standardizing, and enforcing procedures, checklists, and operational rules,
- reducing the number of recurring problems by eliminating root causes rather than just symptoms,
- developing the automation and tooling used by the operations department,
- reviewing change plans and assessing technical risk before a change is deployed,
- developing team members through feedback, mentoring, and growing the team's independence.
We offer:
- real influence on how the operations department works and on the team's standards,
- work with modern technologies: public cloud, containerization, automation, high-availability environments, and hybrid infrastructure,
- a high degree of autonomy and room to make decisions,
- support from experienced technical specialists,
- training in the technologies we deploy,
- a Multisport card and subsidized training and courses.
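Triage and prioritization of tickets and alerts, as described above, is at its core a priority queue over open incidents. A minimal sketch using Python's heapq (the severity levels and ticket IDs are illustrative, not QLOS's process):

```python
import heapq
import itertools

# Lower number = more urgent; ties broken by arrival order.
SEVERITY = {"critical": 0, "high": 1, "medium": 2, "low": 3}

class TriageQueue:
    """Order open incidents by severity, then by arrival."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def add(self, ticket_id, severity):
        heapq.heappush(self._heap, (SEVERITY[severity], next(self._counter), ticket_id))

    def next_ticket(self):
        """Pop the most urgent ticket, or None if the queue is empty."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

q = TriageQueue()
q.add("INC-101", "low")
q.add("INC-102", "critical")
q.add("INC-103", "high")
q.add("INC-104", "critical")
print(q.next_ticket())  # → INC-102  (earlier of the two criticals)
print(q.next_ticket())  # → INC-104
```

Real triage adds SLA clocks and customer impact to the ordering key, but the heap-of-tuples shape stays the same.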
Junior Azure Data Engineer with Databricks (AI&Data)
Accenture Polska
⚲ Warsaw, Krakow, Wroclaw, Poznan, Gdansk, Bialystok
Requirements
- Azure
- PySpark
- SQL
In this role, you will be part of the team driving digital transformation, where you’ll get hands‑on experience with GenAI and LLM in a production environment from day one. You will have the opportunity to learn by doing and contribute to shaping the architecture from the ground up alongside experienced senior specialists.
Why join us? You will work with market experts in a culture built on knowledge sharing and strong technical foundations. Your development will be supported, with access to certifications and training backed by technology partners. We focus on outcomes rather than time spent logged in, ensuring a healthy work‑life balance and flexibility aligned with your needs. Our recruitment process is designed to be clear, efficient, and respectful of your time (HR+skill+confirming).
THE WORK:
- Support building and maintaining cloud‑based data pipelines in Azure and Databricks.
- Assist in extracting, transforming, and integrating data from multiple data sources into central platforms.
- Apply knowledge of data modeling fundamentals, including 3NF and star/snowflake schemas.
- Use SQL and basic PySpark to perform data manipulation, transformation, and preparation.
- Collaborate closely with senior engineers on the implementation of scalable data solutions.
Flexible: The work location for this role may include a mix of working remotely (most of the time), onsite at a client or in an Accenture office – depending on specific project circumstances. With all our roles, there is some in‑person time for collaboration, learning and building relationships with clients, peers, leaders, and communities. As an employer, we will be as flexible as possible to support your specific work/life needs.
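A star schema, one of the modeling fundamentals the posting mentions, keeps numeric measures in a fact table keyed to descriptive dimension tables. A minimal sketch using Python's built-in sqlite3 (the tiny sales schema is illustrative, not project data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One fact table (measures + foreign keys) and one dimension table.
cur.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount REAL
    );
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales VALUES (10, 1, 25.0), (11, 1, 15.0), (12, 2, 60.0);
""")

# The typical star-schema query: join fact to dimension, aggregate a measure.
cur.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.category
    ORDER BY p.category
""")
print(cur.fetchall())  # → [('books', 40.0), ('games', 60.0)]
```

A snowflake schema differs only in that the dimensions themselves are normalized into further lookup tables; the fact-to-dimension join pattern is the same.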
Cloud Engineer / Azure Engineer
VeloBank S.A.
⚲ Warszawa, Wola
Requirements
- Azure DevOps
- Jenkins
- Terraform
- Kubernetes
- Google Cloud Platform
Our requirements:
- experience: at least 2 years managing production infrastructure,
- Azure: good knowledge of Microsoft Azure services, in particular AKS, networking, and security,
- tools: practical knowledge of Terraform,
- attitude: independence, analytical thinking, and willingness to learn,
- languages: English (min. B2) and fluent Polish.
Nice to have:
- certifications: AZ-104 (Azure Administrator), CKA (Kubernetes), and Terraform certifications,
- experience in the financial sector and knowledge of security regulations.
About the project: Our team is responsible for infrastructure and operations in a modern cloud environment. We actively manage infrastructure based on Microsoft Azure, working in a DevSecOps model. You will be responsible for building, maintaining, and securing the foundations of our Microsoft Azure platform. Your main task will be delivering and maintaining stable network and system infrastructure using modern automation methods.
Responsibilities:
- Infrastructure as Code (IaC): implementing and optimizing Azure infrastructure with Terraform; developing and maintaining code repositories and keeping environments consistent,
- automation and CI/CD: designing and developing pipelines in Azure DevOps and Jenkins for automated deployments (CI/CD); automating repetitive administrative tasks with scripts and native tools,
- Observability-as-Code: automating monitoring and logging (Azure Monitor, Log Analytics) directly in the infrastructure code,
- containerization and PaaS: administering infrastructure for Kubernetes clusters (AKS) and other PaaS services,
- Cloud Governance & Security: implementing compliance policies (Azure Policy) and managing access (RBAC, PIM).
We offer:
- The opportunity to grow in #świeciebardziejvelo, where we are guided every day by #otwartością (openness), #zaufaniem (trust) and #rzetelnością (reliability), and have #przedsiębiorczość (entrepreneurship) in our DNA. 😊
- Work where we apply an eXtreme agile approach, looking for optimal, often unobvious and unconventional solutions – which is how we achieve above-average results.
- Participation in social initiatives supporting corporate responsibility, e.g. the "Sadzimy i chronimy razem #VeloLas" initiative. 🌲
- VeloActive: we also support your health. You can cycle to work – we have a zone for cyclists.
- Financial security: stable employment under an employment contract, plus a quarterly bonus scheme.
- A platform supporting your well-being and your mental and physical health, with free consultations with a financial advisor, lawyer, psychologist, psychotherapist, coach, or business mentor.
- Hybrid work, about 1-3 days a week from the office.
- A healthcare program for employees and their families, life insurance, and a sports card.
- The option to learn English or German on an online platform.
- An attractive employee offer for #Velo banking products.
DevOps Engineer
NASK
⚲ Warszawa, Wola
9 000–12 000 PLN gross / month
Requirements
- Terraform
- Ansible
- Linux
- Bash
- GitLab CI
- GitLab Runners
- VMware Tanzu
- NSX-T
- vRealize Log Insight
- GitOps
- ArgoCD
- Grafana
- Prometheus
- Keycloak
- PostgreSQL
- Redis
- Cassandra
Our requirements:
- at least 4 years of experience in IT, including at least 3 years in DevOps/administration,
- hands-on experience managing clusters (not just deploying applications),
- knowledge of the differences between managed and self-managed clusters,
- ability to debug containerd and troubleshoot the container runtime layer,
- proficiency in Terraform and Ansible,
- deep knowledge of Linux and solid networking fundamentals (SDN, firewall rules, load balancing),
- ability to write advanced Bash scripts,
- knowledge of GitLab CI,
- ability to manage GitLab Runners infrastructure and optimize build times,
- ability to create and maintain your own Helm charts and optimize Dockerfiles for different technologies.
Nice to have:
- experience with VMware Tanzu (TKG), NSX-T, and vRealize Log Insight,
- practical experience implementing a GitOps model and using ArgoCD to synchronize cluster state,
- ability to build Grafana dashboards and create Prometheus alerting rules,
- knowledge of secret and certificate management tools,
- Keycloak basics,
- experience administering and backing up databases (PostgreSQL, Redis, Cassandra) and message brokers,
- knowledge of the Harbor registry, including vulnerability-scanning configuration and image retention management,
- ability to conduct Root Cause Analysis (RCA),
- readiness to respond to critical incidents.
Responsibilities:
- designing, implementing, and optimizing Continuous Integration and Continuous Deployment pipelines in GitLab CI; creating reusable templates; integrating with testing and deployment processes; educating development teams on using GitLab CI effectively,
- configuring and maintaining the Tanzu ecosystem, managing the cluster lifecycle, and integrating with the existing infrastructure; monitoring container resource usage, setting appropriate limits, and optimizing the cost of running the Tanzu platform,
- configuring controllers and network policies to ensure secure communication between microservices,
- designing, implementing, and administering Kubernetes clusters in production environments,
- building and maintaining scalable infrastructure; administering and developing environments,
- configuring and maintaining monitoring, logging, and alerting systems to ensure the stability of production systems,
- implementing security mechanisms at every stage of software development,
- diagnosing and resolving infrastructure incidents and optimizing system cost and performance,
- close cooperation with developers on technology choices, automation of local environments, and support for the change deployment process,
- creating and updating technical documentation on system architecture and Disaster Recovery procedures.
Implementation Engineer
Silent Eight
⚲ 100% Remote, Warszawa, Kraków, Wrocław, Poznań, Katowice, Gdańsk, Białystok, Lublin, Rzeszów
11 000 - 14 000 PLN (B2B)
Requirements
- CI/CD
- Jenkins
- Kubernetes
- Linux
- Grafana
- Prometheus
- Scripting language
- Python
- Bash
- PowerShell
- Jira
- Cloud platform
- AWS
- AI
- Infrastructure as Code (nice to have)
- Ansible (nice to have)
- Terraform (nice to have)
About the project: At Silent Eight, we develop our own AI-based products to combat financial crimes that enable things like money laundering, the financing of terrorism, and systemic corruption. We’re a leading RegTech firm working with large international financial institutions such as Standard Chartered Bank and HSBC. Join us and help make the world a safer place!
We are looking for a professional Implementation Engineer whose main objective will be to ensure the successful deployment, integration, and stable operation of Silent Eight solutions in customer environments, along with providing expert technical support.
We offer:
- Engagement type: B2B contract
- Competitive salary
- Paid days to balance your work and life in a way that suits you best
- Location: Fully remote, forever
- Career growth: Promotion and great development opportunities within the organization
- Passionate leadership
- Collaborative team
- The chance to be an integral part of an award-winning, fast-growing RegTech company that values a friendly working atmosphere.
Requirements:
- Experience with CI/CD tools (Jenkins or similar), basic Kubernetes operations, and proficiency in Linux environments.
- Familiarity with observability and monitoring tools (e.g., Grafana, Prometheus).
- Ability to use scripting languages (e.g., Python, Bash/PowerShell) for task automation.
- Capability to diagnose issues across application, infrastructure, and data layers.
- Experience managing tasks, reporting progress/risks, and following priorities in systems like Jira.
- Exposure to cloud platforms (AWS, Azure, GCP) and experience with Infrastructure as Code (Terraform, Ansible) is a strong plus.
- Strong communication and teamwork skills to collaborate effectively with developers, architects, PMs, and clients.
- Ability to communicate clearly with customers in English (mandatory).
- Confident use of AI to speed up troubleshooting, documentation, and problem analysis.
- Curiosity and willingness to learn and explore new technologies.
Daily tasks:
- Deployment and Configuration: Deploy and configure Silent Eight solutions in customer environments (SIT, UAT, PROD), ensuring successful integration according to project requirements and the Statement of Work (SOW).
- Technical Support (L2): Troubleshoot and resolve technical issues (including escalated service desk tickets), perform Root Cause Analysis (RCA), and implement fixes or workarounds in collaboration with solution development teams.
- Client and Team Collaboration: Work closely with Solution Architects, Project Managers, and client IT teams during deployments, solution validation, and the onboarding process to ensure smooth handover and alignment.
- Continuous Improvement & DevOps: Participate in optimizing internal processes and tooling. Support DevOps-related tasks, including CI/CD automation, deployment scripts, and environment setup for internal tools.
- Documentation and Knowledge: Contribute to the creation and maintenance of technical documentation (installation guides, runbooks, troubleshooting playbooks) and gather client feedback to identify process improvements.
- On-Call Support: Be flexible to support urgent escalations (Severity 1 incidents) outside of regular business hours (on-call duty).
DevOps Engineer (DataBase)
Sopra Steria Poland
⚲ Katowice
16 800 - 23 520 PLN (B2B)
Requirements
- Python
- Bash script
- Oracle
- PostgreSQL
- AWS
- Amazon RDS
- AWS EC2
- VPC
- IAM
- AWS S3
- Amazon EKS
- AWS ECS
- CloudWatch
- CloudTrail
- Kubernetes
- Infrastructure as Code
- Terraform
- CloudFormation
- Ansible
- Jenkins
- Virtualization
- VMware
- KVM
- Clearance (nice to have)
About the project: We are looking for a DevOps Engineer (Database) to join Sopra Steria Polska and one of our innovative international squads located in Belgium and Poland. You will play a pivotal role in the delivery of our database services across cloud and on-premises environments. This team is responsible for designing, implementing, and maintaining the full lifecycle of database solutions. Note that we can only offer cooperation to people who are located in Poland and have EU citizenship. Tech stack on the project: Python, Oracle, PostgreSQL, AWS, and Kubernetes.

What we offer:
- BENEFITS (UoP): Luxmed, Medicover Sport, Worksmile, educational platforms, a language-learning platform, referral bonus, copyright-based tax deduction, life insurance, workation
- DEVELOPMENT OPPORTUNITIES (UoP and B2B): certifications (paid by the company), conferences, Tech Lunches, the possibility to join our communities (Project Management, Architecture, Security, Process Management, Leadership, AI and Cloud)

The recruitment process for this position consists of two stages:
- a short phone call with a recruiter (30 min max)
- a one-hour interview on Teams (with both general and technical questions)

Salary range: UoP: 12 000-16 000 PLN gross/month; B2B: 100-140 PLN net/h. All information about the salary range and its additional components will be provided during the first stage of the recruitment process. At our organization, we are committed to fighting all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences.

Company description: Sopra Steria is one of the largest players in the tech industry in Europe, known for its consulting, digital services, and software development. We operate in nearly 30 countries around the world, employing more than 55,000 people. The Polish branch, as the Global Delivery Center, has operated in Katowice since 2007 and has been growing ever since. Currently, our team consists of around 1,000 specialists.
Within the Digital Platform Services department, our teams specialize in areas such as cloud, operating systems, virtualization, databases, backup and storage, as well as networking and security. We also have first-line support consultants who speak French and English, as well as Italian, Spanish, Portuguese, and German. The Application Services department is responsible for areas such as software development, data engineering, testing services, CRM, ITSM and ERP platform integrations, as well as application management for customers in Scandinavia, Benelux, France, Germany, Switzerland, and the UK.

Requirements (must have):
- Strong proficiency in Python and experience with Bash scripting.
- Solid hands-on experience in database operations, especially with Oracle and PostgreSQL.
- Extensive experience working with AWS core services, including RDS, EC2, VPC, IAM, ALB/NLB, S3, EKS/ECS, CloudWatch, and CloudTrail.
- Practical experience with Kubernetes and containerized environments.
- Experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
- Knowledge of configuration management tools like Ansible.
- Experience with CI/CD processes using GitLab or Jenkins.
- Familiarity with on-premise virtualization technologies such as VMware, Hyper-V, KVM, and Nutanix.
- EU citizenship.
- Fluent English (B2/C1).
- Openness to occasional business trips abroad and visits to our office in Katowice.

Nice to have:
- Clearance

Daily tasks:
- Collaborate with team members to write efficient, reusable, and maintainable code.
- Contribute to the continuous improvement of existing services by identifying opportunities for optimization.
- Write unit tests and integration tests where necessary to increase confidence during the development and deployment of the code.
- Collaborate with team members to build, maintain, and optimise pipelines (build, test, deploy, rollback) for database service automation code in GitLab.
- Develop and manage Crossplane Composite Resource Definitions (XRDs) and use Crossplane to deploy databases across cloud and on-prem environments.
- Use Infrastructure as Code (IaC) and scripting to automate environment provisioning, configuration, patching, backups, DR testing, and routine operational tasks.
- Suggest and implement automation of deployment and configuration tasks, including automated deployment strategies (blue/green, canary).
- Document infrastructure, code, scripts, and configurations to ensure maintainability and knowledge transfer within the squad.
- Administer and automate Oracle and PostgreSQL operational tasks, including migrations, backups/restores, replication, upgrades, and performance tuning in both cloud and on-prem setups.
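The automated deployment strategies mentioned in the tasks above (blue/green, canary) all reduce to the same idea: gate each traffic shift on a health signal and roll back on the first failing stage. As a minimal, purely illustrative Python sketch (the function name and threshold are hypothetical, not from the posting):

```python
def canary_decision(error_rates, threshold=0.05):
    """Walk through the error rate observed after each canary traffic
    increase and decide whether to promote the release or roll back.

    error_rates: error rate measured after each stage, in rollout order.
    Returns ("promote", stages_passed) or ("rollback", failing_stage_index).
    """
    for stage, rate in enumerate(error_rates):
        if rate > threshold:
            # First unhealthy stage: abort and shift traffic back.
            return ("rollback", stage)
    # Every stage stayed below the threshold: complete the rollout.
    return ("promote", len(error_rates))

# A healthy rollout passes every stage; an error spike triggers rollback.
print(canary_decision([0.01, 0.02, 0.01]))  # ('promote', 3)
print(canary_decision([0.01, 0.09]))        # ('rollback', 1)
```

In practice the same gate is wired into a GitLab or Jenkins pipeline stage, with the error rates coming from the monitoring stack rather than hard-coded lists.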