Job offers
Security Backend Engineer - SOLO
Deutsche Telekom IT Solutions Hungary
⚲ Budapest, Debrecen, Szeged, Pécs
8 920 - 17 620 PLN (PERMANENT)
Requirements
- Studies/ professional training
- Cloud
- Linux
- Security
- Penetration testing
- Scanning/assessment/automation tools
About the project:

Company Description
As Hungary's most attractive employer in 2025 (according to Randstad's representative survey), Deutsche Telekom IT Solutions is a subsidiary of the Deutsche Telekom Group. The company provides a wide portfolio of IT and telecommunications services with more than 5,300 employees and serves hundreds of large corporate customers in Germany and other European countries. DT-ITS received the Best in Educational Cooperation award from HIPA in 2019 and was acknowledged as the Most Ethical Multinational Company in 2019. The company continuously develops its four sites in Budapest, Debrecen, Pécs and Szeged and is looking for skilled IT professionals to join its team.

Job Description
Your Department: We don't just operate a platform; we shape the future of secure, scalable, open-source cloud technology. Powered by agile ways of working, lean structures, and a passionate, high-performance team, we thrive in one of the most dynamic environments in the industry. If you're a cloud enthusiast and a hands-on professional ready to tackle complex challenges, you'll feel right at home with us.

Why you will love working here:
- Advance your career on a major European cloud platform: gain hands-on experience with cutting-edge, hyper-scale cloud technologies that will strengthen your skills and make you a sought-after cloud expert.
- Turn ideas into action with a vibrant, international team: collaborate with colleagues who thrive on creativity, excellence, and delivering customer satisfaction.

* Please note that remote working is available only within Hungary due to European taxation regulations.

Requirements: Your superpowers:
- Completed studies in a technical, engineering or scientific subject, or comparable professional training.
- 3–5 years of professional experience in IT, ideally with cloud security and/or penetration testing.
- Relevant certification in IT security (such as OSCE, OSEP, OSEE, OSCP, CCSP, CISSP, etc.)
- Hands-on experience with penetration testing and with thwarting attacks.
- Knowledge of relevant scanning/assessment/automation tools.
- Strong experience with Linux and network-related services.
- High level of customer focus.
- Knowledge of agile development processes.
- Fluency in written and spoken English.
- You will be working in the European Union to meet our customers' data security and privacy requirements.

Daily tasks:
- Preventively perform penetration tests and actively combat attacks.
- Drive automation of security testing.
- Analyze and mitigate data and cyber risks.
- Be a trusted advisor on information and IT security.
- Work in a team of specialists who help each other in an open and trusting manner.
Product Manager API & DEV Tools (m/f)
Shoper S.A.
⚲ Kraków, Poznań, Bydgoszcz
15 000 - 18 000 PLN net (B2B) | 13 000 - 16 000 PLN gross (employment contract)
Requirements
- REST API
- API
- Webhooks
- Technical documentation
- CLI
Hi! Join the Shoper Group! We are one of the largest e-commerce platforms in Poland. The project you will join covers the development and maintenance of the API platform and developer tools supporting the integration ecosystem. In this role you will be responsible for designing and developing the REST API and webhooks, ensuring their consistency, performance and stability, and for preparing and maintaining technical documentation consistent with the actual behavior of the API.

Why is it worth working with us?
• You will work with the largest e-commerce businesses in Poland.
• We support employee development through internal and external training.
• Our office is a space that fosters creativity and engagement, and the team consists of experienced professionals ready for any challenge.
• We like to share knowledge not only with each other, but also with our partners and clients.
• Our values are growth, partnership, inspiration and commitment. If you identify with them, we are the place for you.

What will you be doing?
• Owning the development of the API and of developer products and tools (including the REST API, webhooks, documentation, CLI).
• Managing a product-engineering team (setting the team's direction and goals, prioritizing, day-to-day collaboration with engineering, daily team support).
• Managing the backlog and roadmap in a multi-stakeholder environment (other product teams, business, partners, external developers).
• Defining and enforcing product priorities based on data, business impact and platform needs.
• Defining, monitoring and optimizing API metrics (product, business, technical).
• Owning the developer experience (DX): developer onboarding, API consistency and quality, clarity of errors and documentation, shortening time-to-first-integration.
• Collaborating with other product and technology teams on integrations, shared solutions and scaling the partner ecosystem.
• Co-creating and executing the API development strategy (versioning, backwards compatibility, deprecations, design standards).
• Owning the public technical documentation and its consistency with the actual behavior of the API.
• Working with technology partners and gathering developer feedback (support, tickets, community).

What matters to us?
• At least 3 years of experience as a Product Manager / Product Owner on a team developing an API and developer products and tools (e.g. CLI, webhooks, documentation).
• Ability to manage a backlog and priorities in a multi-stakeholder environment (business, product teams, engineering, partners).
• Ability to define, monitor and optimize API metrics: product, business and technical.
• Experience collaborating with other product and technology teams on integrations, the partner ecosystem and related products.
• Very good knowledge of REST APIs and webhook mechanics.
• Experience designing, developing or maintaining APIs with high usage at scale.
• Experience creating and maintaining public technical documentation (e.g. for developers, partners, integrators).
• Experience working with a public or partner API.
• Understanding of versioning, backwards compatibility and integration migration.
• Ability to work at the intersection of business, engineering and external developers.
• Independence in product decision-making and the ability to defend priorities.

Nice to have:
• Familiarity with CLI tooling concepts (practical experience).

What do you gain?
• Luxmed medical care.
• Hybrid work from our offices, or remote work.
• In-office massages.
• Group life insurance.
• A cafeteria benefits system (including a Multisport card, cinema tickets, vouchers, store discounts).
• Access to the eTutor platform (plus the option of online English lessons).
• ShoperUp!: a unique in-house competence development program.
• Commuting subsidies.
• Team events.
• Video games in the offices.
• A fruit-and-vegetable day at the office.
• Company social fund benefits, including holiday subsidies and seasonal bonuses.
• A day off on your birthday.
• Group activities after work, such as football or volleyball.
• A training budget.
• A paid employee referral program.
• A Shoper store for PLN 1 (!).
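The webhook responsibilities in the listing above typically include making deliveries verifiable by integrators. As a hedged, generic sketch (the secret and the signing scheme here are illustrative assumptions, not Shoper's documented API), HMAC-based webhook signature verification can look like this:

```python
import hashlib
import hmac

def sign_webhook(secret: bytes, payload: bytes) -> str:
    """Produce a hex HMAC-SHA256 signature the receiver can recompute."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, payload: bytes, received_sig: str) -> bool:
    """Recompute the signature over the raw body and compare in constant time
    (hmac.compare_digest guards against timing attacks)."""
    expected = sign_webhook(secret, payload)
    return hmac.compare_digest(expected, received_sig)
```

A sender would typically put the signature in a header (e.g. a hypothetical `X-Webhook-Signature`) next to the JSON body; the receiver verifies it over the raw bytes before parsing.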
Lead Machine Learning Engineer
Bondora
⚲ Tallinn
Requirements
- Python
- MLflow
- Databricks
- Docker
- Kubernetes
About Bondora
At Bondora, our mission is clear: to empower people to enjoy life more while alleviating the stress of managing finances. Founded in 2008, Bondora has proudly served a diverse community of over 1 million customers for an impressive 16 years. As a rapidly growing financial technology company, we're reaching new heights with a bold vision. We're set to acquire a banking license, unlocking a world of possibilities for our customers. Our investment product is available Europe-wide, and we're actively expanding our loan product footprint to 11 new countries. Join us on this journey and let's build the future of finance together!

What is this role about?
As the Lead Machine Learning Engineer at Bondora, you will be the backbone of our Data Science delivery. Your mission is to build the robust infrastructure that powers automated model pipelines, ensures deployment reliability, and governs the full ML lifecycle from experimentation to production. This is a strategic and hands-on engineering role. You will collaborate closely with Data Science, Data Engineering, and Development teams to remove friction, improve scalability, and bring stable, high-quality ML solutions into everyday decision making. You will guide the evolution of our ML engineering stack, lead high-impact initiatives, and mentor engineers while shaping a culture of technical excellence.

Your main responsibilities:
- Guide the technical direction of Bondora's ML engineering stack by selecting, evaluating, and implementing technologies, tools, and processes that improve scalability and reliability.
- Lead complex, high-risk, or cross-departmental projects that directly influence Data Science delivery, risk model performance, and production stability.
- Act as the bridge between Data Science, Data Engineering, and Development to identify and solve systemic technical challenges.
- Design and build advanced, production-grade ML infrastructure and set the engineering standard for the team.
- Ensure all ML solutions are secure, observable, resilient, and scalable, following governance, compliance, and operational best practices.
- Mentor ML Engineers through code reviews, design sessions, and hands-on technical leadership.
- Identify weaknesses or inefficiencies in model or data infrastructure and drive company-wide improvements.
- Represent the ML Engineering team in technical discussions and communicate architectural decisions clearly to stakeholders.
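The role above centers on governing the ML lifecycle from experimentation to production. As a minimal, library-free sketch of the idea (hypothetical names and a made-up AUC gate; this mirrors concepts from registries like MLflow's but is not Bondora's or MLflow's actual API), a stage-gated model registry might look like:

```python
from dataclasses import dataclass

STAGES = ("experiment", "staging", "production")  # simplified lifecycle

@dataclass
class ModelVersion:
    name: str
    version: int
    metrics: dict
    stage: str = "experiment"

class ModelRegistry:
    """Toy registry: quality-gated promotion, one production version per name."""

    def __init__(self, min_auc: float = 0.7):
        self.min_auc = min_auc
        self.versions: list[ModelVersion] = []

    def register(self, name: str, metrics: dict) -> ModelVersion:
        # Version numbers are sequential per model name.
        version = 1 + sum(1 for v in self.versions if v.name == name)
        mv = ModelVersion(name, version, metrics)
        self.versions.append(mv)
        return mv

    def promote(self, mv: ModelVersion) -> None:
        # Gate promotion on a quality metric, then advance one stage;
        # promoting to production demotes the previous production version.
        if mv.metrics.get("auc", 0.0) < self.min_auc:
            raise ValueError(f"{mv.name} v{mv.version} fails the AUC gate")
        next_stage = STAGES[min(STAGES.index(mv.stage) + 1, len(STAGES) - 1)]
        if next_stage == "production":
            for other in self.versions:
                if other.name == mv.name and other.stage == "production":
                    other.stage = "staging"
        mv.stage = next_stage
```

The design choice worth noting is that promotion is an explicit, gated transition rather than a redeploy: that is what makes the lifecycle auditable.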
IT Solutions Integrator (m/f)
HIBERUS POLAND SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Wrocław
10 500 - 11 760 PLN net (+ VAT)
Requirements
- Kubernetes
- Linux
- Red Hat
- Windows
- SQL
- WebLogic (nice to have)
- WebSphere (nice to have)
- Tomcat (nice to have)
- Bash (nice to have)
- Python (nice to have)
- Kibana (nice to have)
About the company:
- We are part of hiberus, one of the leading technology companies in Spain, present in more than 14 countries, employing over 3,000 specialists and serving clients worldwide.
- We are professionals with many years of experience in areas such as IT, BI, project management and business management. We are known for the high quality and efficiency of our projects, achieved by properly matching candidates to the target role profile and to the company's organizational culture. We currently work with prestigious institutions in banking, finance, insurance, pharmaceuticals, healthcare and tourism, both in Poland and abroad.
- Partnership, reliability and transparency: these are the values that guide everything we do.

Responsibilities:
- Managing application packages and deployment processes using tools such as Jira, Jenkins, Git and SVN
- Automating application deployments, and developing and maintaining CI/CD processes
- Creating and maintaining scripts that automate functional and integration verification of systems (e.g. SoapUI, JMeter)
- Analyzing, diagnosing and resolving problems and errors occurring in applications in production and non-production environments
- Managing application servers and the applications installed on them
- Analyzing and assessing new projects and features from a technical and deployment perspective
- Cooperating with service and software vendors

We offer:
- Hybrid work (2 days a week in the Wrocław office)
- Multisport card
- Private medical care
IAM DevOps / Platform Engineer
Antal
⚲ Kraków, Warszawa
25 200 - 33 600 PLN (B2B)
Requirements
- GCP
- IAM
- Kubernetes
- Docker
- Terraform
- Neo4j
- PAM
- Kafka
About the project: IAM DevOps / Platform Engineer
We are looking for an experienced IAM-focused DevOps Engineer to design and operate secure, scalable delivery pipelines and cloud infrastructure supporting identity and access management platforms.
- Type of contract: B2B (hourly rate, no paid days off)
- Hybrid model: 6 days per month from the office (Cracow/Warsaw)

Requirements:
Required Technical Skills
- Infrastructure as Code: Terraform (core), GCP provisioning, policy-as-code.
- Containers & Kubernetes: Docker, Kubernetes (GKE), Helm or Kustomize.
- CI/CD Engineering: pipeline design, artifact management, automated testing, release strategies.
- Streaming/Data Platforms: Kafka or Pub/Sub (fundamentals, schema registry, monitoring).
- Graph Platforms: Neo4j (basics, backup/recovery, GDS/APOC, observability).
- DevSecOps & Security: IAM concepts, vulnerability scanning, network security, compliance controls.
- Release & Governance: release operations, change management, documentation standards.
- Regional deployments: data residency, failover strategies, conditional access controls.

Experience & Competencies
- Hands-on experience in IAM and/or Privileged Access Management environments.
- Background in large-scale, global enterprise IT ecosystems.
- Strong troubleshooting and analytical skills.
- Experience working in Agile/Scrum (Jira, Jira Service Desk).
- Ability to work across distributed, multi-country teams.
- Clear communication and technical documentation skills.
- Self-driven, adaptable, and comfortable with evolving technologies.

Daily tasks:
- Design and maintain CI/CD pipelines for IAM services, connectors, policies, and integrations.
- Automate testing, security scanning, and controlled releases across DEV/TEST/PROD.
- Develop and manage Infrastructure as Code using Terraform and Ansible.
- Provision and maintain GCP infrastructure, identity components, and supporting platforms.
- Automate identity lifecycle processes (provisioning, deprovisioning, integrations).
- Deploy and operate IAM workloads on Kubernetes (GKE), cloud-native and serverless environments.
- Manage containerization (Docker), Helm/Kustomize configurations, secrets, and certificates.
- Embed DevSecOps practices: vulnerability scanning, secrets detection, code quality gates.
- Ensure compliance with Zero Trust principles and enterprise security standards.
- Collaborate with architects, security teams, and application owners on integrations and releases.
- Maintain technical documentation for pipelines, IaC, and operational procedures.
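One of the daily tasks above is automating identity lifecycle processes (provisioning and deprovisioning). As a hedged, library-free sketch (the user names and role strings are invented for illustration; a real system would call the IAM platform's own API), the core of such automation is reconciling desired access against actual access with set differences:

```python
def reconcile_access(desired: dict[str, set[str]],
                     actual: dict[str, set[str]]) -> dict[str, dict[str, set[str]]]:
    """Compute a per-user plan of grants and revokes from role maps.

    A user present only in `desired` is a joiner (grant everything);
    a user present only in `actual` is a leaver (revoke everything).
    """
    plan = {}
    for user in desired.keys() | actual.keys():
        want = desired.get(user, set())
        have = actual.get(user, set())
        grants, revokes = want - have, have - want
        if grants or revokes:  # skip users already in sync
            plan[user] = {"grant": grants, "revoke": revokes}
    return plan
```

Running the planner before applying changes is what makes lifecycle automation idempotent and reviewable: the plan can be logged or gated before any role is actually touched.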
DevOps/ Cloud Platform Engineer
Antal
⚲ Kraków
20 160 - 25 200 PLN (B2B)
Requirements
- DevOps
- CI/CD
- AWS
- Azure
- GCP
- IaC
- Terraform
- IAM
- Web
- Communication skills
About the project: DevOps / Cloud Platform Engineer
Hybrid work: 2 days a week from the Kraków office. B2B cooperation.
For our client, we are looking for someone to support the maintenance and development of internal cloud platforms and deployment processes for project teams. If you feel at home at the intersection of cloud, automation and collaboration with application teams, this role is for you.

We offer
- Work with modern cloud technologies
- Collaboration with international project teams
- Real influence on the development and improvement of DevOps platforms
- Growth opportunities toward Cloud / DevOps / SRE
- A stable work environment with clearly defined standards and processes

Requirements:
- Very good knowledge of DevOps principles and of working in a DevOps culture
- Practical experience with CI/CD and popular deployment automation tools
- Knowledge of at least one public cloud: AWS, Azure or GCP
- Experience with Infrastructure as Code (e.g. Terraform)
- Understanding of security and infrastructure topics: IAM, networking, monitoring, logging
- Ability to collaborate effectively with application teams and IT teams at various levels of the organization
- Very good communication skills: the ability to convey complex information clearly and understandably
- A proactive attitude and a focus on continuous process improvement

Nice to have
- Experience with multi-cloud environments
- Knowledge of monitoring and alerting tools
- Experience in environments with strict regulatory requirements or corporate IT standards

Daily tasks:
- Supporting users and resolving tickets concerning internal platforms running on AWS, Azure and GCP
- Collaborating with project teams on application deployments
- Diagnosing and resolving configuration problems in client projects using tools such as Terraform, Jenkins and other CI/CD tools
- Keeping environments compliant with applicable operational standards (including IAM, VPC network configuration, firewall rules, monitoring and alerting)
- Updating operational and support documentation
- Following and promoting Change Management good practices in line with the organization's standards
- Participating in rotational on-call duty as part of 24/7 support
DevOps Engineer/Java Developer
Link Group
⚲ Remote
18 480 - 23 520 PLN (B2B)
Requirements
- DevOps
- Cloud
- Azure
- Virtual Machines
- Networking
- Storage
- RBAC
- Linux
- Ubuntu
- Red Hat
- Disaster Recovery
- CI/CD Pipelines
- Jenkins
- Nexus
- IaC
- Bash
- Ansible
- Terraform
- Docker
- Kubernetes
- Communication skills
- Functional programming (nice to have)
- API (nice to have)
- SOAP (nice to have)
- REST (nice to have)
- JavaScript (nice to have)
- HTML5 (nice to have)
- CSS3 (nice to have)
- Databases (nice to have)
- SQL (nice to have)
- Angular (nice to have)
- Clean Code (nice to have)
- Unit testing (nice to have)
Requirements:
- 3–5+ years of professional experience as a DevOps, Cloud, or Infrastructure Engineer.
- Hands-on experience with Microsoft Azure, including Virtual Machines, networking, storage, and RBAC.
- Solid Linux administration skills (Ubuntu, Red Hat).
- Experience with backup, restore, and Disaster Recovery (DR) solutions.
- Understanding of data encryption concepts (encryption at rest, key and certificate management).
- Practical experience with Jenkins and building CI/CD pipelines.
- Ability to analyze logs and troubleshoot issues in CI/CD and production environments.
- Experience with artifact repositories such as Nexus or similar tools.
- Experience with automation and Infrastructure as Code using Bash, Ansible, and Terraform.
- Working knowledge of Docker and Kubernetes, with an understanding of basic system and application architecture.
- Bachelor's degree in Engineering, IT, Science, or a related technical field.
- Strong communication skills.
- Proficiency in Polish and English sufficient for project work.
- Experience working on international projects.
- Experience working in Agile/Scrum environments.

Nice to have:
- Commercial experience in developing web-based applications using Java and frontend technologies.
- Proficiency in object-oriented programming for web applications (functional programming is a plus).
- Experience working with web services / APIs (SOAP, REST).
- Solid knowledge of JavaScript, HTML5, CSS3, and responsive single-page application design.
- Knowledge of databases and SQL.
- Experience with Angular.
- Understanding of Clean Code principles and unit testing.
- Experience with source code management, Continuous Integration, Continuous Delivery, and automated deployments.
- Demonstrable experience with Java and commonly used Java frameworks.
DevOps Engineer - T Cloud Public - SOLO
Deutsche Telekom IT Solutions Hungary
⚲ Budapest, Debrecen, Szeged, Pécs
7 404 - 13 066 PLN (PERMANENT)
Requirements
- Cloud
- Linux
- Confident network skills
- Infrastructure automation
- Completed studies
- Software defined networks
- Routing
- Firewall
- Networks
- Scrum
- Kanban
- Open source projects (nice to have)
- OpenStack (nice to have)
About the project:

Company Description
Ranked as Hungary's most attractive employer in 2025 (according to Randstad's representative survey), Deutsche Telekom IT Solutions is a subsidiary of the Deutsche Telekom Group. The company provides a wide portfolio of IT and telecommunications services with more than 5,300 employees and serves hundreds of large corporate customers in Germany and other European countries. DT-ITS received the Best in Educational Cooperation award from HIPA in 2019 and was acknowledged as the Most Ethical Multinational Company in 2019. The company continuously develops its four sites in Budapest, Debrecen, Pécs and Szeged and is looking for skilled IT professionals to join its team.

Job Description
Your Department: Step into the engine room of Europe's public cloud! At Open Telekom Cloud, we don't just operate a platform; we shape the future of secure, scalable, open-source cloud technology. Powered by agile ways of working, lean structures, and a passionate, high-performance team, we thrive in one of the most dynamic environments in the industry. If you're a cloud enthusiast and a hands-on professional ready to tackle complex challenges, you'll feel right at home with us.

Why you will love working here:
- Advance your career on a major European cloud platform: gain hands-on experience with cutting-edge, hyper-scale cloud technologies that will strengthen your skills and make you a sought-after cloud expert.
- Turn ideas into action with a vibrant, international team: collaborate with colleagues who thrive on creativity, excellence, and delivering customer satisfaction.

* Please note that remote working is available only within Hungary due to European taxation regulations.

Requirements: Your superpowers:
- 2–5 years of hands-on IT experience with modern cloud technologies: you've been in the trenches and know how to make a cloud run smoothly.
- Strong Linux and confident network skills: you're comfortable managing servers, services, and network infrastructure.
- Experience in infrastructure automation (Ansible, Terraform, or similar): you automate everything you can to keep operations efficient.
- Fluency in English, written and spoken: you communicate clearly across international teams and documentation.
- Completed studies in a technical, engineering or scientific subject, or comparable professional training.
- Basic knowledge of software-defined networks, routing, firewalls and physical networks.
- OpenStack and programming experience in open-source projects is a plus.
- Knowledge of agile development processes (Scrum, Kanban).
- High level of customer focus.
- Ability to assess technical solutions and come up with creative approaches.

Personality: Motivated

Daily tasks:
- Take on big operational challenges in our hyperscale cloud backend, owning everything from hands-on change implementation to deep problem investigation and proactive fixes that kill repetitive tasks before they spread.
- Jump straight from insights to action by working directly at the console and getting up close with the hardware and tech that power our platform.
- Turn your operational know-how into real impact by joining project rollouts and helping launch new features that evolve our cloud.
- Automate wherever possible with modern frameworks, because smart workflows beat manual work every time.
- Thrive in a high-trust expert team that shares openly, supports each other, and solves big problems together.
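The requirements above ask for basic knowledge of routing. The core mechanism behind every route table, physical or software-defined, is longest-prefix match, which can be sketched with Python's standard `ipaddress` module (the route prefixes and next-hop names below are invented for illustration):

```python
import ipaddress

def longest_prefix_match(routes: dict, dst: str):
    """Return the next hop of the most specific route containing dst.

    `routes` maps CIDR prefix strings to next-hop names; the route with
    the longest matching prefix wins, which is how IP forwarding works.
    """
    addr = ipaddress.ip_address(dst)
    matching = [(ipaddress.ip_network(prefix), next_hop)
                for prefix, next_hop in routes.items()
                if addr in ipaddress.ip_network(prefix)]
    if not matching:
        return None  # no route, not even a default
    return max(matching, key=lambda item: item[0].prefixlen)[1]
```

A `0.0.0.0/0` entry acts as the default route precisely because its prefix length of zero loses to any more specific match.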
DevOps Engineer (Senior)
Spyrosoft
⚲ Remote
18 480 - 33 600 PLN (B2B)
Requirements
- AWS
- AWS Lambda
- API Gateway
- AWS DynamoDB
- Redshift
- Amazon RDS
- AWS S3
- SNS
- Amazon SQS
- AWS EC2
- AWS ECS
- WAF
- VPC
- Route53
- IAM
- Datadog
- Cloud
- Security
- Azure
- AWS CDK
- CD pipelines
- GitLab
- TypeScript (nice to have)
About the project: You will join a substantial project as a key infrastructure specialist. You won't be managing a ticket queue; instead, you will partner directly with a mid-sized team of developers (~15 people) to ensure system stability and scalability. We are looking for someone who acts independently and is ready to ensure production reliability and secure cloud network configuration.

Our Tech Stack
You are not expected to know everything upfront, but this is the environment you will work with:
- IaC: Terraform, AWS CDK (TypeScript)
- Clouds: AWS & Azure
- Observability: Datadog, Prometheus, Grafana, Loki
- Core Azure: Kubernetes (AKS), Blob Storage, PostgreSQL
- Core AWS: Lambda, API Gateway, DynamoDB, S3, SNS, SQS, EC2, WAF, VPC
- CI/CD: GitLab CI, Azure DevOps
- Other: RabbitMQ, InfluxDB, Renovate

Requirements:
- Production experience: proven track record of configuring and maintaining production-grade environments.
- Hands-on experience with AWS (Lambda, API Gateway, DynamoDB, Redshift, RDS, S3, SNS, SQS, EC2, ECS, WAF, VPC, Route53, ALB/NLB, Cognito, IAM).
- Observability & monitoring: hands-on experience with Datadog for monitoring, logging, alerting and performance analysis in production environments.
- Cloud networking: strong practical experience in configuring cloud networks (VPC/VNet, subnetting, routing, peering, NAT gateways, security groups/firewalls).
- Cloud expertise: in-depth knowledge and commercial experience with Azure and AWS.
- Tooling: proficiency with Terraform or AWS CDK.
- Practical experience in building and maintaining CI/CD pipelines using GitLab CI/CD, supporting automated deployments and infrastructure changes.
- High autonomy and ability to communicate technical concepts to a cross-functional team.

Nice to have:
- Experience with TypeScript, especially in AWS CDK or serverless applications.

Daily tasks:
- Production stability: maintain high availability and security of production environments.
- Cloud networking: configure and manage VPCs/VNets, subnets, routing tables, peering, and network isolation.
- Infrastructure as Code: provision and manage resources using Terraform or AWS CDK.
- Developer support: optimize CI/CD pipelines and assist developers in understanding infrastructure constraints.
- Observability: maintain monitoring stacks to ensure full system visibility.
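The subnetting work listed above can be sketched with Python's standard `ipaddress` module. The CIDR range below is an illustrative assumption, not this project's actual address plan, and the usable-host count follows plain IPv4 rules (cloud providers typically reserve a few extra addresses per subnet):

```python
import ipaddress

# Carve a hypothetical VPC CIDR into four equal /26 subnets,
# e.g. one per availability zone or network tier.
vpc = ipaddress.ip_network("10.20.0.0/24")
subnets = list(vpc.subnets(new_prefix=26))

for net in subnets:
    # .hosts() excludes the network and broadcast addresses for IPv4
    print(net, "usable hosts:", len(list(net.hosts())))
```

The same module also answers containment questions (`addr in net`), which is handy when checking whether a peering route or security-group rule actually covers a given instance IP.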
DevOps: volunteering, open-source project, help counter depression in children and teenagers
NeuroN Foundation
⚲ Remote
? - ? PLN (INTERN)
Requirements
- AWS
- Terraform
About the project: Join an innovative social EDTECH startup run by a foundation! We are changing the future of education, leveling the playing field for young people and countering depression. We are a community built around the NeuroN Foundation, and after hours we are building a platform that will let children and teenagers carry out social, scientific and commercial projects. For these activities they will be able to earn scholarships and micro-grants. Our project is supported by Google, Amazon, Atlassian and other global companies. We work remotely, after hours, joining the forces of experts from different industries. It is also a perfect opportunity to gain startup experience, grow, and use top tools relied on by Tesla, Harvard, Netflix, IBM, Uber and others. Many large companies support us with their tools. We are not profit-oriented, and the project is run entirely on a volunteer basis, including by the members of the board. As a foundation, we have been supporting the development of soft skills and countering depression among children and teenagers since 2018. We want to create the first entirely Polish project of this kind: a good alternative to social media, which in their current form contribute to digital dementia, learning problems and alienation. Together we can change education and give young people new opportunities! Let's show what Poles can do!

Requirements:
- 1 year of experience as a DevOps engineer
- AWS
- Terraform
- 18 years of age or older
- Good communication skills

Daily tasks:
- Process automation
- Infrastructure management
- Managing cloud environments
- Monitoring and incident response
- Collaboration with development teams
Staff / Principal Cloud Platform Engineer
Flyr
⚲ Amsterdam
Requirements
- Kubernetes
- GCP
- AWS
- Terraform
- ArgoCD
Our Vision
We believe in a world where travel companies can innovate freely, growing and accelerating their business, while delivering the experience travelers want and the change the industry needs. FLYR is a technology company that unlocks freedom to innovate for the travel industry, eliminating legacy constraints to enable real-time decision making and create the experiences travelers seek. With FLYR, businesses are able to quickly unlock modern retailing capabilities, improve revenue performance and modernize the e-commerce experience.

Flight Itinerary (About The Role)
We're looking for an exceptional and highly experienced hands-on Principal or Staff Platform Engineer to join our team. In this pivotal role, you will be the technical leader and force multiplier for our platform engineering initiatives, driving the architecture, strategy, and execution of our next-generation cloud infrastructure and developer experience. If you are an expert in cloud engineering, Kubernetes, modern CI/CD, and love tackling large-scale complexity, we want to talk to you.

What Your Journey Will Look Like (Responsibilities)
- Architectural Leadership: Tackle the most complex technical problems across the platform and drive scalable, resilient, and cost-effective architectural solutions.
- Technical Strategy: Proactively identify technical gaps across various platform domains (e.g., compute, networking, data, security, observability, devex) and propose high-impact optimizations and strategic roadmaps.
- Initiative Ownership: Drive large, complex technical initiatives end-to-end, from concept and design to implementation and rollout, ensuring high-quality and timely delivery.
- Cross-Functional Alignment: Maintain strong cross-functional communication and alignment with product, security, and other engineering teams to ensure our platform meets broad organizational needs.
- Developer Experience (DevEx): Design and implement best-in-class CI/CD pipelines with a relentless focus on improving the DevEx and accelerating feature delivery.
- Mentorship & Growth: Accelerate the team's growth by providing expert mentorship, technical guidance, and upskilling opportunities for mid-level and senior engineers.
- Knowledge Sharing & Change Management: Conduct regular knowledge-sharing sessions across the wider engineering organization, and drive technical change management to ensure successful adoption of new platform patterns and practices.
Senior DevOps Engineer (Azure)
N-iX
⚲ Kraków, Wrocław, Warsaw
19 997 - 25 552 PLN (B2B)
Requirements
- Cloud platform
- Infrastructure as Code
- Docker
- Kubernetes
- AKS
- CD pipelines
- Flux CD
- Grafana
- Prometheus
- Linux
- Cloud security
- IAM
- Relational database
- MongoDB
- Azure
- SQL
- SaaS
- Microservices architecture
- GitLab (nice to have)
About the project: Our client was established with the mission to fundamentally transform the execution of capital projects and operations. Designed by industry experts for industry experts, our client's platform empowers users to digitally search, visualize, navigate, and collaborate on assets. Drawing on 30 years of software expertise and 180 years of industrial legacy as part of a renowned Scandinavian business group, the client plays an active role in advancing the global energy transition. The company operates from Norway, the UK, and the U.S.

About the team: Data Next is a cross-functional platform team responsible for enabling data engineers to ingest and store customer data in the SaaS product, making it available to stream-aligned (feature) teams. We are actively building the next generation of the client's platform in collaboration with 16 other teams. Our contribution focuses on the data foundation that underpins the entire product experience. Our mission is to enable fast and reliable onboarding, storage and access to customer data for our client.

Requirements:
- Strong experience with cloud platforms (Azure preferred) and cloud-native architectures.
- Hands-on experience with Infrastructure as Code, ideally Terraform.
- Experience with containerization technologies (e.g., Docker) and orchestration platforms (e.g., Kubernetes/AKS).
- Experience designing and maintaining CI/CD pipelines and Git-based workflows, preferably GitLab.
- Familiarity with GitOps practices and tools (e.g., Flux CD).
- Experience implementing monitoring, logging, and observability solutions (e.g., Grafana, Prometheus).
- Strong knowledge of Linux systems and cloud networking concepts.
- Experience with cloud security practices, including IAM, secrets management, and encryption.
- Ideally, experience working with document and relational databases in cloud environments (e.g., MongoDB Atlas, Azure SQL), including backup, scaling, and performance considerations.
- Ideally experience supporting multi-tenant SaaS architectures, including environment isolation, tenant provisioning, and scalability considerations. - Ideally familiarity with microservices architectures and event-driven systems in a SaaS environment. Codzienne zadania: - Ability to analyze complex infrastructure and operational challenges and develop effective solutions. - Strong troubleshooting skills across cloud, containers, CI/CD environments, and database infrastructure. - Strong attention to detail and reliability. - Ability to work effectively in a collaborative, cross-functional team environment. - Strong communication skills, both written and verbal. - Ability to coach and support developers in adopting DevOps best practices.
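The multi-tenant points above (environment isolation, tenant provisioning) can be illustrated with a minimal sketch; the naming scheme, environment names, and resource kinds below are hypothetical, not the Client's actual conventions:

```python
# Minimal sketch (assumed names, not the Client's real code): deriving
# isolated, per-tenant resource names for a multi-tenant SaaS platform.

def tenant_resources(tenant_id: str, environment: str) -> dict:
    """Return a provisioning plan with tenant-scoped, isolated names."""
    if environment not in {"dev", "staging", "prod"}:
        raise ValueError(f"unknown environment: {environment}")
    slug = tenant_id.lower().replace("_", "-")
    prefix = f"{environment}-{slug}"
    return {
        "namespace": f"{prefix}-ns",           # Kubernetes namespace per tenant
        "database": f"{prefix}-db",            # dedicated database/schema
        "storage_container": f"{prefix}-data"  # blob container for tenant data
    }
```

Deriving every resource name from one tenant-scoped prefix keeps isolation auditable: any resource whose name lacks the prefix is immediately suspect.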
Senior DevOps Engineer (Azure | ArgoCD | Docker | Helm)
1dea
⚲ Warszawa, Kraków, Wrocław, Gdańsk
25 200 - 27 720 PLN (B2B)
Requirements
- Azure DevOps
- ArgoCD
- Bash
- PowerShell
- Git
- Mobile DevOps
- iOS
- Android
- JFrog Artifactory
- Azure Artifacts
- QA
- Dev
- PKI (nice to have)
About the project: We are looking for an experienced DevOps Engineer specializing in the Azure ecosystem and in infrastructure for mobile and desktop applications. The selected person will join a team responsible for building secure, scalable CI/CD processes in a regulated environment, supporting the Backend, Frontend, and QA teams.
Organizational details:
- Industry: IT consulting
- Vacancies: 3
- Location: 100% remote work
- Start: ASAP (max. 1-month notice period)
- Rate (negotiable): 150 - 165 PLN net + VAT / h
- Engagement terms: B2B (outsourcing via 1dea), full-time, long-term
We offer:
- A long-term B2B contract (we sign an indefinite-term agreement right away; the Client is interested exclusively in long-term cooperation)
- 100% remote work
- Flexible working hours (start between 7:00 and 10:00)
- Becoming part of a recognizable company with a strong market position
- Modern hardware and software (provided by our Client)
- A culture of collaboration: we value teamwork, openness, respect, and mutual support in developing skills. Creativity is welcome: your ideas and suggestions for the project will be taken into account :-)
Requirements:
- Very good knowledge of Azure DevOps and Docker, plus experience with ArgoCD
- Proficiency in Bash or PowerShell and advanced Git usage
- Mobile DevOps: practical knowledge of Fastlane and experience building and signing iOS/Android applications
- Artifact management: experience with tools such as JFrog Artifactory or Azure Artifacts
- Communication: ability to collaborate with QA/Dev teams and attention to technical documentation
- Communicative English (B2+)
Daily tasks:
- CI/CD automation: designing pipelines in Azure DevOps (YAML) for mobile (iOS/Android) and desktop platforms.
- macOS infrastructure: maintaining macOS build agents, including managing Xcode, certificates, and profiles.
- GitOps & Kubernetes: managing releases in a Kubernetes environment using ArgoCD and Helm.
- Quality gates: implementing automated tests, linters, and software vulnerability scanning.
- Security: managing secrets (Key Vault), code-signing keys, and RBAC access.
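The quality-gate duties above (failing a pipeline on vulnerability-scan findings) can be sketched as follows; the severity scale and the gate policy are illustrative assumptions, not the client's actual rules:

```python
# Hedged sketch of a CI "quality gate" step: fail the pipeline when a
# vulnerability scan reports findings above an allowed severity.
# Severity names and the default threshold are assumptions.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_passes(findings: list, max_allowed: str = "medium") -> bool:
    """Return True when no finding exceeds the allowed severity."""
    limit = SEVERITY_RANK[max_allowed]
    return all(SEVERITY_RANK[f["severity"]] <= limit for f in findings)
```

In a real pipeline this check would run after the scanner step and set a non-zero exit code to stop the release.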
Senior Cloud Platform Engineer
Air Space Intelligence
⚲ Gdańsk
35 100 - 46 800 PLN (B2B)
Requirements
- Kubernetes
- Docker
- Python
- K8s
- GCP
- CI
- CD pipelines
- Google Cloud
- PostgreSQL
- AWS
- Python (nice to have)
- Any OO language (nice to have)
About the project: What we offer:
- Flexible working hours: with a global team supporting mission-critical operations, we have to be at our best, so we've adopted flexible hours to allow for balance.
- Premium healthcare and health insurance: health and wellness are critical to living a happy and resilient life. We provide first-class medical, dental, and vision coverage to you and your dependents.
- Competitive salary & equity: our team is our biggest asset. We value the hard work that each person commits to us, so we provide competitive, transparent compensation packages.
- Generous relocation package to ensure a smooth relocation to the Tricity area.
- High-energy environment: we live by the mantra that now is better than never. You will find yourself surrounded by peers who constantly challenge the status quo.
- Flexible time off: we encourage you to take time off as you need it. While our team is hard-working, our success is measured by output, not time spent.
- Office, equipment & tools: we bring the best tools to the mission, from ergonomic desk setups to modern productivity software. We have a freshly prepared team breakfast and lunch every day.
How we hire: We look at the interview process not as a screening test but as an opportunity to simulate what it would look like to work together. We build the interview process around you.
Requirements: What we value:
- Ability to maintain and grow our infrastructure stack (K8s, AWS, Docker, Terraform, Helm)
- Deep understanding of CI/CD pipelines (CircleCI)
- Advanced knowledge of Kubernetes, Docker, and an OO language, along with AWS, PostgreSQL, Helm, Terraform, Python 3, and Grafana
- Ability to collaborate with others across multiple teams
- Flexibility to adjust to changing business priorities
Daily tasks: As a Cloud Platform Engineer, you will be part of the team scaling our infrastructure to handle 10 times the load of mission-critical flight operations over the next 12 months. You'll spend your time coding software solutions alongside our product engineers, adding new features that benefit our airline customers (Python), and building and monitoring dashboards for critical alerts (Grafana, Prometheus, Graphite).
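Dashboards and critical alerts of the kind mentioned above usually reduce to a number such as the remaining SLO error budget; a minimal sketch (illustrative only, not Air Space Intelligence's actual tooling):

```python
# Hedged sketch: fraction of an availability SLO's error budget left in
# a window. An alert might page when this drops below some threshold.

def error_budget_remaining(slo: float, total: int, failed: int) -> float:
    """Remaining error-budget fraction for the window (can go negative)."""
    allowed = (1.0 - slo) * total          # requests allowed to fail
    if allowed == 0:
        return 0.0 if failed == 0 else float("-inf")
    return (allowed - failed) / allowed
```

For a 99.9% SLO over one million requests, 1000 failures are allowed; 500 observed failures leaves half the budget.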
Developer (Azure)
Reply Polska
⚲ Katowice
12 600 - 22 680 PLN (B2B)
Requirements
- Azure
- DevOps
- Java
- Docker
- Kubernetes
About the project: Benefits:
- Motivizer Benefits Platform to choose and manage all your benefits in one place. You receive a budget (550 PLN monthly). You can choose a medical care package, meal tickets, sports cards (we have Multisport on preferential terms, plus membership cards to one of the most popular gyms), cinema tickets, shop vouchers, discounts, and more.
- Language courses: you'll have access to a multi-language learning platform enabling you to practice your language skills and learn new ones!
- Regular and systematic further training opportunities, both internally and from external providers. We support your ongoing learning and development.
- Cooperation within an internal community is our everyday reality. We have networking events, coding challenges, and company parties for different occasions.
About Fincon Reply: Fincon Reply is a business and IT consultancy specializing in the financial services industry. Fincon Reply proactively advises banks, the Sparkassen Finance Group, insurance companies, and near-financial companies, as well as their suppliers, on their digital transformation. The company provides on-site support with specialised teams of consultants and developers and delivers turnkey solutions.
Requirements:
- Very good communication skills in English and Polish.
- Communication skills in German are a plus.
- Successful completion of higher education as a qualified IT specialist or a university degree in computer science.
- At least 2 years of experience as a Software Developer.
Ideal skillset:
- Experience with Java programming
- Experience working with Azure Cloud
- Expertise in containerization and orchestration technologies (Docker, Kubernetes)
- Expertise in GitLab and Azure DevOps
- Experience working in an agile environment.
- Experience working in a self-organized team.
- Good familiarity with Jira and Confluence.
- Availability to work in a hybrid mode or remotely, with at least one visit to the office per month.
Daily tasks:
- You are a member of a team for a dedicated customer in the German financial industry.
- You collaborate with development teams to migrate an on-premise environment to the Azure cloud.
- You design and implement CI/CD pipelines using GitLab and Azure DevOps.
- You manage and maintain containerized applications using Docker and Kubernetes.
- You develop Java applications hosted on Azure, using Azure Container Apps or Azure Kubernetes Service (AKS).
- You contribute to the overall cloud migration strategy.
- You are responsible for the product you build.
- You proactively bring in ideas to create innovative solutions.
- You work in an agile environment, using Scrum.
- You meet with your team in the office (Katowice) at least once a month to plan your work and discuss progress.
- You take responsibility in a self-organized team.
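Managing containerized applications on AKS, as described above, typically starts from a Deployment manifest; a minimal sketch that generates one as a plain dict (the service name, image, and labels are hypothetical):

```python
# Minimal sketch of generating a Kubernetes apps/v1 Deployment manifest
# for a containerized service. Names and labels are illustrative only;
# in practice this dict would be serialized to YAML and applied.

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Deployment manifest as a Python dict."""
    if replicas < 1:
        raise ValueError("at least one replica required")
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            # selector labels must match the pod template labels
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```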
DevOps Engineer / Java Developer (m/k/d)
UPVANTA SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Wrocław
Requirements
- Microsoft Azure
- Ubuntu
- RedHat
- Bash
- Ansible
- Terraform
- Docker
- Kubernetes
- JavaScript (nice to have)
- HTML (nice to have)
- CSS (nice to have)
- SQL (nice to have)
- Angular (nice to have)
- Java (nice to have)
- Very good communication skills.
- Experience with Angular. (nice to have)
Scope of duties:
- Managing and maintaining environments in Microsoft Azure (VMs, networking, storage, RBAC).
- Administering Linux systems (Ubuntu, RedHat).
- Designing and maintaining CI/CD processes using Jenkins.
- Infrastructure automation (Bash, Ansible, Terraform as IaC).
- Managing artifact repositories (e.g., Nexus).
- Deploying and maintaining containerization (Docker) and orchestration (Kubernetes).
- Monitoring systems, analyzing logs, and troubleshooting test and production environments.
- Implementing and maintaining backup, restore, and disaster recovery (DR) mechanisms.
- Ensuring data security (at-rest data encryption, key and certificate management).
- Collaborating with the development team in an Agile/Scrum environment.
- Participating in international projects.
DevOps Engineer / Java Developer (m/k/d)
Upvanta
⚲ Remote
21 000 - 26 040 PLN (B2B)
Requirements
- DevOps
- Cloud
- Microsoft Azure
- Linux
- Ubuntu
- Jenkins
- CI/CD
- Bash
- Ansible
- Terraform
- Infrastructure as Code
- Docker
- Kubernetes
- CIS
- CA
- Web services
- SOAP (nice to have)
- REST API (nice to have)
- JavaScript (nice to have)
- HTML5 (nice to have)
- CSS (nice to have)
- SQL (nice to have)
- Angular (nice to have)
- Clean Code (nice to have)
- Unit testing (nice to have)
- Java (nice to have)
About the project: We are looking for an experienced specialist for the position of DevOps Engineer / Java Developer, who will be responsible for maintaining and developing cloud environments, automating CI/CD processes, and supporting the development of Java web applications. The role combines infrastructure and development competencies, with an emphasis on stability, security, and the quality of the delivered software.
Requirements (MUST HAVE):
- Min. 3 years of experience as a DevOps / Cloud / Infrastructure Engineer.
- Practical experience with Microsoft Azure.
- Very good knowledge of Linux system administration (Ubuntu, RedHat).
- Experience with backup and DR solutions.
- Knowledge of data encryption and key management.
- Experience working with Jenkins and building CI/CD pipelines.
- Ability to analyze logs and solve problems in CI/CD and production environments.
- Experience with Nexus Repository or a similar tool.
- Knowledge of Bash, Ansible, and Terraform (Infrastructure as Code).
- Practical knowledge of Docker and Kubernetes and the basics of system and application architecture.
- A technical degree (IT, engineering, exact sciences, or related).
- Very good communication skills.
- Polish and English at a level enabling project work.
- Experience working in an Agile/Scrum methodology.
Nice to have:
- Commercial experience building Java web applications (backend) and frontend.
- Knowledge of OOP in web applications (elements of functional programming welcome).
- Experience with web services / APIs (SOAP, REST).
- Good knowledge of JavaScript, HTML5, CSS3, and responsive single-page design.
- Knowledge of databases and SQL.
- Experience with Angular.
- Knowledge of Clean Code and unit testing principles.
- Experience with SCM, CI/CD, and automated deployments.
- Experience with frameworks used in Java projects.
Daily tasks:
- Managing and maintaining environments in Microsoft Azure (VMs, networking, storage, RBAC).
- Administering Linux systems (Ubuntu, RedHat).
- Designing and maintaining CI/CD processes using Jenkins.
- Infrastructure automation (Bash, Ansible, Terraform as IaC).
- Managing artifact repositories (e.g., Nexus).
- Deploying and maintaining containerization (Docker) and orchestration (Kubernetes).
- Monitoring systems, analyzing logs, and troubleshooting test and production environments.
- Implementing and maintaining backup, restore, and disaster recovery (DR) mechanisms.
- Ensuring data security (at-rest data encryption, key and certificate management).
- Collaborating with the development team in an Agile/Scrum environment.
- Participating in international projects.
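The backup, restore, and DR duties in this listing imply a retention policy; a hedged sketch of a grandfather-father-son style selection (the keep counts are assumptions, not the client's actual policy):

```python
# Hedged sketch of backup retention: keep the last `daily` backups,
# plus one backup per ISO week and one per month, up to given counts.
from datetime import date, timedelta

def backups_to_keep(backup_dates, daily=7, weekly=4, monthly=12):
    """Given backup dates sorted newest first, pick the ones to retain."""
    keep, weeks_seen, months_seen = set(), set(), set()
    for i, d in enumerate(backup_dates):
        if i < daily:
            keep.add(d)                          # most recent dailies
        wk = d.isocalendar()[:2]                 # (ISO year, ISO week)
        if wk not in weeks_seen and len(weeks_seen) < weekly:
            weeks_seen.add(wk)
            keep.add(d)                          # newest backup of that week
        mo = (d.year, d.month)
        if mo not in months_seen and len(months_seen) < monthly:
            months_seen.add(mo)
            keep.add(d)                          # newest backup of that month
    return keep
```

Everything outside the returned set is eligible for deletion, which keeps storage bounded while preserving recent restore points at several granularities.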
DevOps Engineer
Respect Energy Fuels Sp zo.o.
⚲ Warszawa
15 000 - 26 000 PLN (B2B)
Requirements
- Python
- SQL
- Azure
- React
- TypeScript
- Linux
- Git
- CI/CD
- Kubernetes
- Docker
About the project: We maintain the production environment for web applications and the data layer. We are looking for a DevOps Engineer who likes to automate, cares about security and stability, and at the same time wants to work closely with the dev team and the business. Less "ops on tickets", more real influence on how we deliver the product.
Requirements:
- Production experience with Kubernetes and Docker (deployment, troubleshooting, security basics).
- CI/CD (e.g., GitHub Actions) and the ability to automate deployments and environments.
- Azure (or a comparable cloud) and an understanding of the services needed to run applications (registry, networking, access).
- Helm and working with application configuration (values, parameters, environments).
- Linux + Git; an SRE mindset: measuring, monitoring, reducing MTTR.
Daily tasks:
- Maintaining and developing the Kubernetes environment (deployments, resources, autoscaling, RBAC) in the Azure cloud.
- Building and improving CI/CD (GitHub Actions) and the release process (versioning, rollback, change control).
- GitOps: working with ArgoCD/Helm: declarative configuration, drift detection, and repeatable deployments.
- Edge/networking: ingress-nginx, TLS certificates (cert-manager), network policy basics.
- Secrets and access management: Vault/secret management, least privilege, security best practices.
- Observability: monitoring/logging/alerts, runbooks, participation in incident response and RCA.
- Collaborating with the dev team (backend/data/frontend): diagnosing production problems and improving the platform.
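The SRE requirement in this listing (measuring, monitoring, reducing MTTR) can be made concrete with a small sketch that computes MTTR from incident timestamps; the data layout is hypothetical:

```python
# Illustrative SRE-style sketch: mean time to restore (MTTR) computed
# from (opened, closed) ISO-8601 timestamp pairs.
from datetime import datetime

def mttr_minutes(incidents) -> float:
    """Average restore time in minutes over (opened, closed) pairs."""
    if not incidents:
        return 0.0
    total_seconds = sum(
        (datetime.fromisoformat(closed) - datetime.fromisoformat(opened)).total_seconds()
        for opened, closed in incidents
    )
    return total_seconds / len(incidents) / 60.0
```

Trending this number per week or per service is what turns "reduce MTTR" from a slogan into a measurable goal.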
DevOps Engineer
Vaimo
⚲ Wrocław
15 000 - 25 000 PLN (B2B)
Requirements
- Docker
- Kubernetes
- Google Cloud Platform
- Ansible
- Cloud
- Bash script
- Python
- Bash
- SRE
- Git
- Gitflow
- Linux
- Networking
- REST API
- DevOps
- Google cloud (nice to have)
- FinOps (nice to have)
- MySQL Databases (nice to have)
- InfoSec (nice to have)
About the project: Vaimo is a full-service digital experience agency that helps brands and businesses grow by uniting commerce, content, data, and insights into seamless, engaging, and beautifully designed customer experiences. With 16+ years of experience and offices across the Nordics, EMEA, and North America, we combine strategy, design, and technology to create digital experiences that drive measurable results. Our teams work across B2C, B2B, and D2C, from redefining global lifestyle brands to transforming complex enterprise ecosystems.
What we do:
- We develop and maintain our in-house PaaS (Platform as a Service) based on Kubernetes
- We consult clients and project teams on cloud infrastructure
- We monitor and resolve incidents 24/7 for our infrastructure (on-call optional)
- We participate in deep architectural discussions to ensure solutions are designed for successful deployment, security, and availability in the cloud
- We develop and implement internal systems, processes, and best practices, designed to increase productivity, for use by other teams
- We collaborate with our Software Engineers
- We troubleshoot cloud issues and respond to escalations
- We verify and resolve configuration and other non-software-related issues
What we offer:
- A team with a lot of flexibility, initiative, and opportunities to experiment with technologies
- A friendly environment within the team and in the organization
- Vaimo's strong culture of openness, teamwork, excellence, and having fun striving towards our goals
Requirements: Skills & requirements: All the skills below are relevant to our work, but we are interested in your individual mix of these competencies:
- At least 4 years designing and implementing Google Cloud Platform-based solutions.
- At least 3 years of experience with Docker and 2 years of experience with Kubernetes.
- At least 3 years of experience managing infrastructure resources as code, and 3 years of experience with automation (e.g., Ansible, custom Bash and Python scripts).
- Knowledge of how software engineers work and how to apply their best practices in an SRE team, e.g., git, gitflow, GitOps.
- In-depth knowledge of Linux troubleshooting, including networking, file systems, security, and the kernel.
- A team player with exceptional communication skills, working well with others in the group and the rest of the engineering organization.
- Familiarity with cloud security and governance models.
You will get extra credit for:
- Experience with Google Cloud
- Experience with FinOps culture
- Experience with InfoSec and its practices
- Experience with MySQL databases
Daily tasks:
- Proactively work to create a scalable and highly reliable cloud
- Optimize performance and running costs of our environments by having best-practice configurations in place
- Operate and maintain our New Relic, Prometheus, and Loki-based observability stack
- Develop automation using Argo Workflows and Events
- Troubleshoot, identify bugs, and respond to incidents
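The "custom Bash and Python scripts" automation mentioned above often includes retry logic for flaky operations; a minimal, deterministic exponential-backoff schedule sketch (the parameters are illustrative, and real scripts usually add random jitter):

```python
# Hedged sketch: delays (in seconds) for successive retries of a flaky
# operation, doubling each time but capped so no wait exceeds `cap`.

def backoff_schedule(retries: int, base: float = 1.0, cap: float = 30.0):
    """Return the list of delays: base * 2**n for each retry, capped."""
    return [min(base * (2 ** n), cap) for n in range(retries)]
```

A caller would sleep for each delay in turn between attempts, giving a struggling dependency time to recover instead of hammering it.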
Cloud DevOps
Deutsche Telekom IT Solutions Hungary
⚲ Budapest, Debrecen, Pécs, Szeged
15 680 - 23 846 PLN (PERMANENT)
Requirements
- Degree
- Cloud
- Docker
- Kubernetes
- Terraform
- Jenkins
- Communication skills
- DevOps
- Infrastructure Engineering
- Agile (nice to have)
- GitLab (nice to have)
- GitOps (nice to have)
- Monitoring tools (nice to have)
- Jira (nice to have)
- Confluence (nice to have)
- Helm (nice to have)
- CI/CD (nice to have)
About the project: As Hungary's most attractive employer in 2025 (according to Randstad's representative survey), Deutsche Telekom IT Solutions is a subsidiary of the Deutsche Telekom Group. The company provides a wide portfolio of IT and telecommunications services with more than 5300 employees. We have hundreds of large customers, corporations in Germany and in other European countries. DT-ITS received the Best in Educational Cooperation award from HIPA in 2019 and was acknowledged as the Most Ethical Multinational Company in 2019. The company continuously develops its four sites in Budapest, Debrecen, Pécs and Szeged and is looking for skilled IT professionals to join its team.
Department: This position is in the IoT Technology team of Deutsche Telekom IT Solutions Hungary. The team consists of IT professionals working on big data, data analytics, data sharing, and IoT solution implementation projects.
Project: Energy Cloud is a comprehensive IT solution aimed at achieving energy-saving goals. This approach is defined based on the methodology of the Science Based Targets initiative, covering Scope 1 (direct emissions from the company's operations) and Scope 2 (indirect emissions from electricity usage). This means that Deutsche Telekom (currently operated by GSUS in Germany for office buildings, in cooperation with Telekom Natcos) is reducing its emissions in line with the goals of the Paris Climate Agreement. This method is used to develop, plan, and implement scientifically grounded, measurable, and transparent emission reduction strategies. What's special about the project is that we must be able to receive data both from external networks and from Telekom's private network. There is an external machine learning team, an IoT device control team, and a few other teams developing in parallel with us that will eventually migrate to our platform. We choose Google Cloud managed solutions for everything.
Requirements: Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or Information Technology, or equivalent work experience in IT
- 3+ years of DevOps or infrastructure engineering experience
- Strong communication skills and excellent written and verbal English
- Eagerness to continually learn, grow, and improve in a collaborative team environment
- Experience maintaining and deploying to Docker/Kubernetes
- Experience with a cloud (AWS, Azure, GCP, or equivalent) is a must
- Experience with Terraform
- Experience with Jenkins
Advantages:
- Knowledge of security infrastructure technology
- Good understanding of networks, especially in cloud environments
- Knowledge of agile ways of working
- Experience with GitLab CI/CD (GitLab runners), GitHub Actions
- Experience with GitOps tools (e.g., ArgoCD, FluxCD)
- Experience with monitoring in cloud environments
- Experience with Jira, Confluence
- Experience with Helm
Daily tasks:
- Develop, document, integrate, and deploy Infrastructure as Code into multiple staged cloud runtime environments
- Provide abstract interfaces for an overarching automation backend
- Coordinate security measures in various layers of a resource management product
- Monitor and ensure code security, code quality, and runtime quality using standard tools and rulesets
- Work agile in a highly skilled, cross-functional international team (Germany, Hungary, Spain)
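Deploying Infrastructure as Code into multiple staged cloud runtime environments, as this listing describes, usually involves a promotion gate; a hedged sketch with assumed stage names (the real pipeline's stages and rules are not specified here):

```python
# Minimal sketch of a staged-environment promotion gate: a build may
# only be deployed to a stage after all earlier stages have passed.
# Stage names are assumptions for illustration.

STAGES = ["dev", "staging", "prod"]

def can_promote(passed, target: str) -> bool:
    """Allow deploying to `target` only when every earlier stage passed."""
    if target not in STAGES:
        raise ValueError(f"unknown stage: {target}")
    required = STAGES[:STAGES.index(target)]
    return all(stage in passed for stage in required)
```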
Azure Cloud Engineer
OChK
⚲ Warszawa
14 000 - 17 000 PLN (PERMANENT)
Requirements
- DevOps
- Kubernetes
- Azure
- IaaS
- PaaS
- Infrastructure as Code
- Terraform
- Ansible
- Jenkins
- Bash
- PowerShell
- Python
- Google Cloud (nice to have)
- AWS (nice to have)
- GitHub Actions (nice to have)
- SRE (nice to have)
- Open source (nice to have)
- OpenStack (nice to have)
- VMware (nice to have)
- Hyper-V (nice to have)
- KVM (nice to have)
- AI (nice to have)
About the project: OChK is a leading cloud services provider in Poland, with established experience in advanced data analytics, artificial intelligence, and cybersecurity solutions. It consistently develops a rich portfolio of value-added services both on its own OChK Platform and on the cloud platforms of its strategic partners: Google and Microsoft. OChK's experts build scalable and secure environments in line with best practices and market standards, supporting clients' business goals at every stage of cloud adoption. They provide end-to-end services: from strategy and migration, through IT environment management and maintenance, building applications and data/AI solutions, to optimization, automation, security, and support in ensuring that the implemented solutions comply with regulations. Nearly 400 organizations from almost all sectors use OChK's services, including public institutions, private companies, corporations, supervised entities, and startups.
At OChK:
- we value proactivity and personal initiative, so we support autonomy in decision-making,
- we build our organizational culture on values such as professionalism, shared responsibility, and mutual respect,
- we work task-based in a hybrid or remote mode,
- we attach great importance to effective onboarding, during which, in a relaxed atmosphere and with the full support of your CloudBuddy, you get to know the team, the company, and your responsibilities,
- we invest in your development by funding trainings and certifications,
- from your first day of work we give you access to Google and Microsoft learning platforms,
- we work in an agile environment, using productivity-boosting applications such as Google Workspace, Slack, GitHub, and Jira,
- we offer private medical care,
- we let you join group insurance on preferential terms,
- we cover the cost of a Multisport card,
- we organize and co-finance English lessons,
- we like to get together during a variety of initiatives, both company-wide and grassroots, which help us get to know each other better and maintain a good working atmosphere.
Requirements: Our requirements:
- over 2 years of experience building and administering enterprise-class technology solutions,
- good knowledge of popular operating systems,
- good knowledge of Kubernetes container technology,
- good knowledge of the Azure platform in the area of network and infrastructure management and of its key services: IaaS, CaaS, PaaS, FaaS,
- basic knowledge of building infrastructure in the Infrastructure as Code model (e.g., Terraform, Ansible, Jenkins) and fluency in scripting languages (e.g., Bash, PowerShell, Python),
- practical knowledge of Azure DevOps/GitHub Actions or an equivalent solution.
Nice to have:
- knowledge of other public cloud platforms (Google Cloud, AWS),
- Microsoft certificates in MS Azure or M365,
- experience working in SRE teams,
- participation in open-source projects,
- visibility in the technology community (attending meetups and conferences, running a blog),
- good knowledge of virtualization technologies (e.g., OpenStack, VMware, Hyper-V, KVM, VirtualBox),
- knowledge of Microsoft Foundry (formerly Azure AI Foundry),
- experience implementing or managing solutions from the M365 family.
Daily tasks:
- participating in designing, building, and deploying architecture for cloud-native solutions, and in modernizing and migrating on-premise solutions to the public cloud,
- participating in projects for large clients from various sectors of the economy and/or public institutions,
- working in small, self-organizing teams in an engineering role, under the wing of an Architect/Tech Lead, where you will be responsible for your part of the solution and how it is built,
- continuously developing your cloud technology skills and exchanging knowledge and experience with our clients and the rest of the team,
- the opportunity to take part in conferences, meetups, and other events to build and develop relationships with our clients and partners, in particular with the Microsoft team.
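A common chore in Infrastructure-as-Code work of the kind this listing describes is spotting drift between the declared state (what Terraform/Ansible defines) and the actual state reported by the platform; a minimal, tool-agnostic sketch (the keys and values are hypothetical):

```python
# Illustrative sketch of configuration drift detection: compare the
# desired state from IaC definitions with the actual cloud state.

def detect_drift(desired: dict, actual: dict) -> dict:
    """Report keys that are missing, unexpected, or changed."""
    return {
        "missing": sorted(set(desired) - set(actual)),      # declared, absent
        "unexpected": sorted(set(actual) - set(desired)),   # present, undeclared
        "changed": sorted(
            k for k in desired.keys() & actual.keys() if desired[k] != actual[k]
        ),
    }
```

Real tooling (e.g., `terraform plan`) does this per resource attribute; the principle is the same diff of desired versus observed state.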
Assistant Director – Data and Analytics
HSBC Technology Poland
⚲ Kraków
15 775 - 22 000 PLN (PERMANENT)
Requirements
- Data visualization
- Qlik Sense
- Use cases
- Data science
- Degree
- PhD
- Python
- SQL
- Cloud
- Analytics platform
- GCP
- Azure
- AWS
- Big data
- Hadoop
- Spark
About the project: Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
Your career opportunity: This function is part of the CIB D&A COO function and aims at bringing together the existing CMB and GB D&A functions into a single CIB Data & Analytics (D&A) function. As part of this significant programme of change, the function is expected to re-design and rationalize the Analytics, Technology, Architecture and Data. We are seeking an Analyst – Business Consulting to join the Data Science chapter in our team. The person will play a key part in using the power of data to solve problems.
What we offer:
- Competitive salary
- Annual performance-based bonus
- Additional bonuses for recognition awards
- Multisport card
- Private medical care
- Life insurance
- One-time reimbursement of home office set-up (up to 800 PLN)
- Corporate parties & events
- CSR initiatives
- Nursery discounts
- Language classes
- Financial support with trainings and education
- Social fund
- Flexible working hours
- Free parking
Requirements: What you need to have to succeed in this role:
- Good-to-have knowledge of banking-domain use cases, with specific applications to data science problems.
- 6+ years of relevant experience, preferably in the banking and financial sector.
- University degree in Computer Science, Finance, Statistics, Mathematics or another quantitative discipline; a PhD or Master's degree is highly preferred.
- Proficiency in Python, SQL, or similar analytically oriented programming languages.
- In-depth knowledge of and experience with machine learning techniques and algorithms. Experience with cloud analytics platforms such as GCP, Azure and AWS, and big data platforms such as Hadoop, Spark, etc.
- Extensive technical skill in data mining; able to understand various data structures, both structured and unstructured, and common methods of data transformation.
- High technical aptitude, problem-solving abilities, intellectual curiosity, motivation, and passion for data discovery.
- Open-minded and self-motivated, with good communication and interpersonal skills as well as a strong sense of responsibility.
- Knowledge of and good experience with data visualization tools such as Qlik Sense will be a strong plus.
Daily tasks:
- Extract, transform, and load (ETL) data from various sources, ensuring data quality, accuracy, and integrity. Design and optimize data pipelines, ensuring efficient data extraction, storage, and processing.
- Extract and integrate data from external sources, such as public databases, social media platforms and third-party vendors, to enhance advanced analytics capabilities.
- Conduct in-depth analysis of complex transactional and demographic datasets to identify patterns that reveal risks and opportunities.
- Develop innovative and impactful data-based tools and models for propensity modeling and risk mitigation; continuously evaluate the effectiveness of models and make recommendations for improvements.
- Engage various stakeholders to understand and collect project requirements, present analyses and results, make recommendations on strategy formulation, and solicit project feedback.
- Work with and leverage solutions and approaches developed by the wider HSBC Analytics community.
- Own end-to-end delivery responsibility for Data Science projects and guide, train and mentor junior team members involved in the project.
- Contribute to Data Science chapter initiatives at the CIB D&A level.
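The ETL duties in this listing can be sketched in miniature; the column names (`customer_id`, `amount`) and the cleaning rules below are hypothetical, not HSBC's actual schema:

```python
# Hedged sketch of a tiny ETL step: drop malformed rows (the "cleanse"
# part of ensuring data quality), then aggregate amounts per customer.

def etl(rows):
    """Filter out rows with missing IDs or non-numeric amounts, then
    return total amount per customer_id."""
    cleaned = [
        r for r in rows
        if r.get("customer_id") and isinstance(r.get("amount"), (int, float))
    ]
    totals = {}
    for r in cleaned:
        totals[r["customer_id"]] = totals.get(r["customer_id"], 0) + r["amount"]
    return totals
```

At scale the same shape (filter, then group-and-aggregate) would run in SQL or Spark rather than a Python loop.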
Staff Platform Engineer
Ashby
⚲ Lisbon, Barcelona, Berlin, Copenhagen, Stockholm, Amsterdam
From 11 700 EUR (UoP)
Requirements
- Node.js
- TypeScript
- React
- PostgreSQL
- AWS
Staff Platform Engineer, EU
We're looking for a curious, rigorous, problem-hungry platform engineer (who codes!) to carry the ball as we bring Ashby to the big leagues. Ashby builds software that lets talent teams build an efficient, delightful, respectful hiring process. Similarly, you're an engineer who wants to build a "paved road" that excellent engineering teams can safely take to the moon and back. We have notable customers like Notion, Linear, Shopify, and Snowflake. Our growth and retention metrics are best-in-class among our peers: we have tens of millions in ARR, growing >100% year over year, over 2500 customers, very low churn, and many years of runway. We'll share more details once we meet, but you now probably have a good idea as to why we're hiring for this role 😅. We've listed this role twice: as a Platform Engineer and Site Reliability Engineer; our team does both, and we are open to candidates who lean towards one or the other.
About the role and how we work
Hi 👋 I'm Colin, Head of EMEA Engineering. I've spent a number of years leading engineering teams in startups, and that has always included being close to infrastructure teams, no matter what name they've worn (SRE, infrastructure, platform, etc.). I've got my hands dirty building the initial infrastructure for startups and know the value a talented infrastructure engineer brings. The rigour, the discipline, the peace and quiet when everything just hums along. Our infrastructure is in a good place for now, but nothing is static. Ashby continues to grow rapidly, putting strain on our existing infrastructure. We're always looking to give our customers more powerful hiring software, and building new product features often requires new pieces of infrastructure. Having herded plenty of snowflake servers in the past, I've learned there's a better way. I (and Ashby) place a lot of value on infrastructure-as-code.
As a Platform Engineer at Ashby, you’ll get to dive into scaling problems, add new capabilities to our platform, and think about how our entire team interacts with infrastructure. All our engineers own their projects end-to-end and ship with minimal oversight. We don’t put up roadblocks to ensure security when common sense will do, and we don’t build processes like change management boards around the lowest common denominator. But with great power comes great responsibility: we handle personal and confidential data about some of the biggest decisions we ever make at work. As we grow, more and bigger customers rely on us to be reliable and secure, and how we operate internally will need to evolve. We’re at an inflection point where our ability to scale and deliver a seamless experience has a make-or-break impact – we have some of the fastest-growing companies using our platform every day to hire hundreds of people per month. We need someone like you to make good decisions, debug thorny issues, and build us a future-proof platform that can withstand this scale. Our small but mighty infrastructure team has set up a secure and simple environment (we don’t believe in spinning up a new service unless necessary!) for our growing product team to build in. That’s where you come in: you, too, will own projects end-to-end and have an impact on core parts of the Ashby developer and user experience. For instance, you could work on:
- Optimizing our homegrown ultra-dynamic recruiting DSL-to-SQL compiler, and creating tools to help developers do so
- Creating automated guardrails for the security and privacy of our customer data
- Helping our developers ship features fast through canary deploys, gradual rollouts, and feature flags, while keeping complexity manageable and reducing downtime
- Working with the business and the engineering team to define SLOs and implement the corresponding SLIs
- Ensuring all communication with external services supports retries and circuit-breakers
- Implementing the infrastructure to support an event-driven architecture and data warehouse
We’re looking for someone who can build systems that an engineer would like to work with: mature and boring but open-minded and approachable. We have to balance reliability with flexibility. Software and its availability are now mission-critical to almost every working professional. To be an SRE in today’s world, you have to be extremely comfortable evaluating risk, both the risks you take and the risks others take. Why you should or shouldn’t apply You should apply if:
- You never stop. You get weirdly obsessed with a problem that doesn’t yet make sense and turn it every which way in your head until the explanation dawns. You’ll search every rock, inventory every clue, hunt every mismatch. We do that, too: together we’ll be armed with state-of-the-art monitoring tools and an impressive amount of data, and we’ll join you in the adventure.
- You don’t take shortcuts. You speak up for the future user, the edge case, the doomsday design. You know product engineers want to build it with you, and you see them as allies, giving them the power and knowledge to achieve greater things.
- You’re someone who cares about what you do and the team you do it with, and you want to work with others who do as well. You’ll be on interview panels choosing your next colleagues, and you’ll take that seriously. You only want to work with people who make you better, and who want to make you better.
- You’ve built infrastructure at a slightly later stage than Ashby is at: you know how to deal with millions of data points, have seen great (or not great) infrastructure make or break customer experience, and have automated everything from provisioning to monitoring and release processes.
- You’re a Swiss army knife (all nationalities welcome ;) ). You’ll get every hard problem the company faces. 
You’ll get to do infrastructure updates, security enforcement, database optimization, Kubernetes debugging, and digging through TypeScript traces to figure out what doesn’t work. You probably don’t feel like an expert at at least some of that... and that appeals to you. All that makes for a pretty specific kind of role, and the job isn’t to everyone’s taste! You should not apply if:
- You don’t want to make your own decisions on what is the best paved road to build for Ashby, and expect a lead or manager to make the final call on what that is. Our leads (and managers) give ample commentary and feedback on technical decisions and how they’re made, but you ship what you want to build and are accountable for it.
- You hate SQL. We have a lot of features built around making the best out of data, and our platform engineers also sometimes dive into a gnarly report or advise engineers on a more performant data model to use.
- You don’t want to code. Our SREs are some of our best software engineers, and they are just as responsible for the application as the other engineering teams, albeit at a platform level. Reviewing code and submitting code changes will be part of your day-to-day.
- Your primary mode of communicating best practices to engineers is live meetings. We’re a very async culture, and written communication (and code) is how changes get made. As an Ashby SRE, you will need to share new tooling and best practices with engineers faster than your next meeting opportunity will take you.
- You’ve never delivered a project, on your own, without someone prodding you for updates. We have no project or delivery managers to fill your calendar with busy work, but the flip side is you have to do your own project management, seek the help you need to get unstuck, and cut scope when it’s worthwhile. 
Technology Stack I’m sharing our tech stack with the caveat that we don’t require previous experience in it: TypeScript (frontend & backend), Node.js, React, Apollo GraphQL, Postgres, Redis. We use Datadog and Sentry on 100% cloud-based (AWS) infra. We take developer experience and reliability seriously: all engineers are on call in a follow-the-sun model, and everyone contributes to developer tooling. What We’re Building As engineers, we are used to tooling that makes us better at what we do. When we started Ashby, we saw the opposite with Talent Acquisition software. Recruiting teams were leveling up how they did their work, but instead of software meeting this new standard, it held them back. Scheduling a final round is an excellent example. Recruiting teams wanted to schedule candidates faster, track interviewer preparation and quality, and do it with half the headcount. A recruiter needed to manually collect availability from the candidate, identify qualified interviewers, perform “Calendar Tetris” to find who is available to interview the candidate, schedule on the earliest date possible, and make any last-minute adjustments as availability changed. They must do this while considering the interview load on each individual and whether interviewers need to be trained and shadowing others. 🥵 TA software didn’t help. As hiring managers, we know TA is a critical function, and as engineers, we know software can do better. So, we built and continue to build Ashby to give TA teams the highest standard of tooling. Software that’s intelligent and powerful. Software that provides insights into where they’re failing and automates or simplifies many of the tasks they’re underwater with. We want other functions and departments to be jealous of what TA teams can do with Ashby, and today they often are! 
Engineering Culture Our engineering culture is motivated by Abhik and Benji’s (our co-founders) belief that a small talented team, given the right environment, can build high-quality software fast (and work regular hours!). We do it through:
- Minimal process with ownership over decisions normally made by product and design
- Natural collaboration and deliberate communication
- Investing in tools and abstractions that give us leverage
- Putting effort into building a diverse team
Minimal Process & Lots of Ownership The best engineers we’ve worked with delivered reliably magical outcomes. They took customer problems and relentlessly drove them to solutions that were not only successful but often brilliant and creative. While they did this with minimal oversight, stakeholders were never in the dark as to what was going on, and no setback was a surprise. Traditional product-development processes aren’t meant for the best engineers. Their purpose is to create consistent outcomes regardless of the engineer’s skill. But consistency comes at the expense of an engineer’s time and freedom, both ingredients necessary to generate those magical outcomes. As a result, process stifles the best engineers and doesn’t give others the opportunity to practice the behaviors that made the best engineers the “best.” At Ashby, we want to build an environment that encourages every engineer to be their best. So, at Ashby, every Engineer runs their project. Product Managers (and Designers) build strategy, do customer research, and hand off problem briefs to Engineers. Engineers take on the rest: they research the problem, write product specs, build wireframes, and implement their solution end-to-end. We rely on engineers, not process, to push information outward to the relevant folks (e.g., Product Managers) and pull folks in to help (e.g., Designers, Infra). 
It’s a new level of ownership for many engineers, but we’d rather an engineer fail a bit and coach up their skills than use process as a crutch. Not everyone succeeds in our culture, but those who do thrive. Collaboration is Natural & Communication is Deliberate Our engineering team consists of lifelong learners who are talented but also humble and kind (meet them here!). These attributes create an environment where collaboration happens naturally. We combine this with research, prototyping, and written proposals to see around corners and get feedback from the team across time zones. Focus time is something that we hold sacred, and, with thoughtful and deliberate communication, engineers spend under two hours per week in meetings (Abhik wrote about it here). We also meet in person at least twice a year, once as a department and once as a company. You also have a small budget to meet up with folks in your city/region. Increase Leverage, not Team Size We built Ashby with the quality, breadth, and depth that many customers would expect from much larger teams over larger time scales. We’ve done this through investment in:
- Great developer tooling. Our CI/CD takes ~10m, and we deploy at least 15x a day. A debugger that works out of the box. Everyone on the team has contributed to our developer experience 💪🏾.
- Building blocks to create powerful and customizable products fast. At the core of Ashby is a set of common components (analytics modeling and query language, policy engine, workflow engine, design system) that we constantly improve. Each improvement to a common component cascades throughout our app.
Put Effort into Diversity Diverse teams drive innovation and better outcomes. 
Having seen my mother and partner build their careers as minority women in non-diverse fields, I want to make sure Ashby creates opportunities for the next generation of engineers from underrepresented groups. Today, 21% of engineers at Ashby are from underrepresented groups. It’s not great, and we are taking conscious steps to improve, like sourcing diverse candidates, providing generous paid family leave, avoiding leetcode interviews, and more. Interview Process At Ashby, our team and interview process are designed to help you show your best self. We’ll dive into past projects and simulate working together via pair programming, writing product and tech specs collaboratively, and talking through decisions. There are no leetcode or whiteboard exercises. Our interview process is three rounds:
- Introduction call with Hiring Manager (15 to 30m, live)
- A technical screen where we pair in our actual codebase (1h, live)
- Three non-coding interviews that focus on technical design, debugging incidents, and infrastructure (3h 15m, live; can be split across multiple days)
Depending on our leadership team’s bandwidth, we may start with an additional 30m screen with a recruiter. Your hiring manager will be your main point of contact and prep you for interviews. Each round will have written guidance so you know what to expect (you’ll need minimal preparation). You’ll meet 4 to 6 people in engineering (with 5-15 minutes in each interview to ask them questions). If we don’t give an offer, we’ll provide feedback! Your First Three Months at Ashby We want an exceptional onboarding experience for every new hire. At Ashby, your dev environment is set up with a single script, you push your first product change on day one, and you spend the rest of your time shipping product changes that give you a tour of our codebase and best practices. The product changes increase in scope and ambiguity from simple copy changes to the delivery of a prominent, impactful feature. 
Your manager will do a 30, 60, and 90-day review to give feedback and calibrate on how we work together. It’s a team effort to get you successfully onboarded; you’ll have a peer paired with you to answer questions, pair program, and check in often to see if you need help. The rest of the team will run training sessions on our culture, product, engineering process, and technical architecture.
Site Reliability Developer / SRE
WatchGuard Technologies, Inc.
Wymagania
- Python
- Java
- Go
- Docker
- Kubernetes
WatchGuard embraces a Flexible Work Philosophy. Most of our employees can choose to work from the office, at home, or any combination of the two. We’ve built a global workforce of outstanding team members and a flexible culture built on trust, collaboration, and belonging. Who you are: You are a customer-focused, data-driven developer who has a passion for delivering the best customer experience possible. You enjoy the thrill of coordinating and troubleshooting production issues and want to proactively find and fix issues. You have an understanding of cloud technologies, automation, everything-as-code, networking, microservice architectures, object-oriented design, and SRE and DevOps cultures, along with proficiency in Python, Java, or Go and a desire to learn others. You come with proven knowledge of software engineering best practices for the full software development lifecycle, including coding standards, code reviews, security, source control management, build processes, automated testing, deployment, monitoring, chaos engineering, and automated self-healing operations. You also bring knowledge of tools and technologies like CloudFormation, Terraform, New Relic, Lambda, Serverless, Elasticsearch, Docker, Kubernetes, Spark, Flink, Jenkins, GitHub, Artifactory, Jira, etc. You are able to lead production incident response and postmortems through your strong analytical and problem-solving abilities as well as verbal and written communication skills. What to expect as a member of the SRE team at WatchGuard: The WatchGuard SRE team owns the reliability and security of our production cloud environments alongside our application development teams to ensure we deliver the best possible experience to our customers. As you learn more about our systems, you will be:
* Ensuring smooth production operations with development teams and leading large-scale event response.
* Defining operational and security policies, standards, and processes for our development teams to follow. 
* Guiding our development teams through the process of establishing, monitoring, and achieving their service level agreements through the definition of service level indicators and objectives.
A Typical Day in the Life of a Site Reliability Developer, SRE Team at WatchGuard: As an SRE at WatchGuard, a “typical” day may have you:
* Working side-by-side with our application teams in production AWS, Azure, and hybrid cloud environments to ensure proper monitoring, security, reliability, automation, and support are in place.
* Driving an operational excellence culture throughout WatchGuard with the simplification, automation, analysis, and evolution of our activities and processes.
* Championing security and operational best practices to become known as a cloud expert by the rest of our development teams located across the globe.
* Striving to provide the best possible customer experience even when things go wrong by participating in our on-call rotation and then coordinating and leading the production troubleshooting efforts.
* Using your programming skills to develop automation or assist with debugging and fixing complex production issues.
* Being curious, learning new things, and then sharing your knowledge through documentation, presentations, and guidance to other teams.
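The service level indicators, objectives, and agreements mentioned above reduce to simple error-budget arithmetic. A minimal sketch (the function name and return shape are illustrative assumptions, not WatchGuard tooling):

```python
def error_budget(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    """Compute an availability SLI and how much of the error budget
    implied by `slo_target` (e.g. 0.999) the observed failures consumed."""
    sli = 1.0 - failed_requests / total_requests
    # The budget is the number of failures the SLO permits over this window.
    allowed_failures = (1.0 - slo_target) * total_requests
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {"sli": sli, "budget_consumed": consumed, "slo_met": sli >= slo_target}
```

For example, a 99.9% availability SLO over 1,000,000 requests permits 1,000 failures; observing 400 failures consumes 40% of that budget while still meeting the SLO.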
Senior Software Engineer (.NET / Azure) | f/m/d
ERGO Technology & Services S.A.
⚲ Warsaw, Gdansk
Wymagania
- .NET
- C#
- Azure
- Terraform
- CQRS
- designing and developing backend services using a state-of-the-art Microsoft Azure cloud-based tech stack
- following clean architecture principles and implementing appropriate patterns (e.g. CQRS)
- taking care of code quality, participating in pair programming and code reviews
- covering different areas of backend technology, both infrastructure and code components (REST-based API controllers, housekeeping services, CI/CD pipelines, data access, and database design)
- working in an Agile team, taking part in the corresponding meetings (daily, refinement, planning & review, retrospective)
- presenting developed features to stakeholders
Senior Site Reliability Engineer
Bondora
⚲ Tallinn
Wymagania
- Kubernetes
- Docker
About Bondora At Bondora, our mission is clear: to empower people to enjoy life more while alleviating the stress of managing finances. Founded in 2008, Bondora has proudly served a diverse community of over 1 million customers for an impressive 16 years. As a rapidly growing financial technology company, we’re reaching new heights with a bold vision. We’re set to acquire a banking license, unlocking a world of possibilities for our customers. Our investment product is available Europe-wide, and we’re actively expanding our loan product footprint to 11 new countries. Join us on this journey and let’s build the future of finance together! What is this role about? As a Site Reliability Engineer, you’re tasked with making sure that our infrastructure stays a couple of steps ahead of our ambitious growth plans. You’ll solve complex infrastructure problems and make those solutions available to engineering teams so that they can deploy with peace of mind and focus on customer problems. It’s a rare chance to be part of a product-driven culture where you make your own decisions, the emphasis is on the quality of solutions rather than outputs, and everyone strives to improve across practices, tools, and technologies. You’ll enjoy a high degree of ownership, a diverse set of problems (cloud, developer experience, databases, provisioning, etc.), and an environment that values problem-solving and collaboration. This role is located in our Tallinn or Tartu office. Your main responsibilities will be:
- Build a reliable platform for our products, making sure that uptime, monitoring, and recovery meet modern standards.
- Standardize and automate infrastructure tasks so that engineers are able to self-serve and benefit from best practices.
- Build a developer experience that allows teams to ship without friction, confidently and daily, making best practices available through tooling and automation.
- Manage and optimize cloud infrastructure, including databases. 
- Respond to and solve production incidents.
Senior MultiCloud DevOps/ Platform Engineer with Harness
Xebia sp. z o.o.
⚲ Wrocław, Gdańsk, Rzeszów, Warszawa
22 000 - 30 000 PLN (B2B)
Wymagania
- Python
- Kubernetes
- Git
- Terraform
- Devops
- Harness
- Cloud
- AWS
O projekcie: Who We Are While Xebia is a global tech company, our journey in CEE started with two Polish companies – PGS Software, known for world-class cloud and software solutions, and GetInData, a pioneer in Big Data. Today, we’re a team of 1,000+ experts delivering top-notch work across cloud, data, and software. And we’re just getting started. What We Do We work on projects that matter – and that make a difference. From fintech and e-commerce to aviation, logistics, media, and fashion, we help our clients build scalable platforms, data and AI solutions, and cutting-edge applications to shape the future of tech. Our clients include McLaren, Aviva, Deloitte, Spotify, Disney, ING, UPS, Tesco, Truecaller, AllSaints, Volotea, Schmitz Cargobull, Allegro, InPost, and many, many more. We value smart tech, real ownership, and continuous growth. We use modern, open-source stacks, and we’re proud to be trusted partners of Databricks, dbt, Snowflake, Azure, GCP, and AWS. Fun fact: we were the first AWS Premier Partner in Poland! Beyond Projects What makes Xebia special? Our community. We support tech communities, organize meetups (Software Talks, Data Tech Talks), and have a culture that actively supports your growth via Guilds, Labs, and personal development budgets, for both tech and soft skills. It’s not just a job. It’s a place to grow. What sets us apart? Our mindset. Our vibe. Our people. And while that’s hard to capture in text – come visit us and see for yourself. 
Wymagania: Your profile:
- ready to work in EST time (overlap),
- 5+ years in DevOps, Platform Engineering, or SRE roles,
- 2+ years hands-on with Harness CI and/or CD, including pipelines-as-code, templates, governance, and rollout strategies,
- strong experience with Kubernetes (operations, Helm/Kustomize, operators),
- good proficiency with at least one major cloud (AWS, Azure, or GCP),
- demonstrated expertise with Terraform, reusable modules, and multicloud provisioning (CloudFormation, ARM/Bicep),
- hands-on experience with scripting (Bash, Python, or Go) and an automation mindset,
- experience with CI/CD and Git-based workflows, GitHub Actions or comparable CI tools,
- familiarity with security integration (SAST/DAST, scanning, OPA/Conftest),
- expertise with observability fundamentals (metrics, logs, traces),
- Ansible for configuration and orchestration,
- upper-intermediate/advanced English (B2/C1).
Working from the European Union region and a work permit are required. Nice to have:
- GitOps (Argo CD/Flux),
- Harness Feature Flags, SRM, Chaos, or Cloud Cost Management,
- Kafka experience (operational or integration),
- Elasticsearch cluster operations,
- Redis (caching, broker patterns, session management),
- FinOps exposure,
- familiarity with compliance frameworks (SOC2, ISO27001, HIPAA, PCI),
- SRE experience with SLOs, SLIs, and error budgets,
- PKI, vaulting, workload identity solutions. 
Codzienne zadania:
- designing, implementing, and maintaining Harness pipelines for Kubernetes, ECS, serverless, and VM deployments, including canary/blue-green strategies and automated rollbacks,
- operating CI pipelines and shared build infrastructure, improving build performance and developer feedback loops,
- configuring and managing Feature Flags to support progressive delivery and experimentation,
- integrating Harness SRM/Chaos (if applicable) to support deployment verification, resilience testing, and error budget policies,
- partnering with FinOps to leverage cost dashboards, budgets, and guardrails for cloud spend optimization,
- creating reusable pipeline templates, governance controls, and “paved roads” for application teams,
- implementing secrets management, artifact versioning, and environment promotion flows (dev → test → staging → prod),
- standardizing infrastructure provisioning with Terraform, Helm/Kustomize, CloudFormation, and ARM/Bicep,
- supporting Git-based workflows (GitHub, GitLab, Azure Repos, Bitbucket) and applying GitOps practices (Argo CD/Flux) where appropriate,
- embedding automated tests, security scans (SAST, DAST, dependency/image scanning, SBOM), and quality gates into CI/CD pipelines,
- enforcing RBAC, least privilege, SSO/SCIM, and audit readiness across platforms,
- contributing to incident response, post-incident reviews, and the continuous evolution of SLIs/SLOs,
- integrating observability systems (Prometheus/Grafana, OpenTelemetry, Datadog, New Relic) into deployment verification and runtime dashboards,
- optimizing reliability, build performance, caching, storage architecture, and runtime platform performance,
- onboarding product and engineering teams onto the Harness platform,
- running enablement workshops, producing documentation, and maintaining self-service resources,
- measuring and reporting delivery metrics such as lead time, deployment frequency, change failure rate, and MTTR, and driving improvement initiatives based on these insights.
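The delivery metrics named in the last task (lead time, deployment frequency, change failure rate, MTTR) are straightforward to compute once deployment and incident records exist. A hypothetical sketch (the data shapes and function name are assumptions, not the Harness API):

```python
from datetime import datetime, timedelta
from statistics import mean


def delivery_metrics(deployments, incidents):
    """deployments: list of (timestamp, caused_failure: bool) pairs;
    incidents: list of (opened, resolved) datetime pairs.
    Returns the change failure rate (fraction) and MTTR in hours."""
    failures = sum(1 for _, caused_failure in deployments if caused_failure)
    change_failure_rate = failures / len(deployments) if deployments else 0.0
    mttr_hours = (
        mean((resolved - opened).total_seconds() / 3600.0
             for opened, resolved in incidents)
        if incidents else 0.0
    )
    return {"change_failure_rate": change_failure_rate, "mttr_hours": mttr_hours}
```

Deployment frequency and lead time follow the same pattern: count deployments per time window, and average the commit-to-deploy interval.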
Senior DevOps Engineer
Delasport
⚲ Sofia
Wymagania
- Kubernetes
- Docker
- Terraform
- Ansible
- Kafka
About Us: Delasport is where technology meets the excitement of sports and gaming! As a leading powerhouse in iGaming software, we provide everything from platform development to player account management. Our platform is designed to make every player feel like the star of the show, with personalized experiences and cutting-edge technology at its heart. Since our founding in 2010, we have won multiple international awards and have established a proven track record of excellence, stability, and overall success. This achievement only drives us to reach even greater heights. Currently, we are proud to have over 500 experts across various business fields and several offices in Bulgaria (Sofia and Plovdiv), Ukraine, and Malta. Your Team: The DevOps team is responsible for designing and automating CI/CD pipelines, managing the company’s entire infrastructure, and deploying applications in Kubernetes. They handle cloud provisioning, ensure system consistency through configuration management, and maintain centralized logging and monitoring to ensure seamless, scalable operations. How You’ll Make an Impact:
- Design and implement robust CI/CD pipelines for custom applications deployed in the public cloud.
- Automate infrastructure provisioning and ongoing maintenance to ensure seamless operation within the cloud environment.
- Develop, deploy, and manage distributed applications, setting up auto-scaling policies within a Kubernetes orchestration platform.
- Implement and manage a real-time data streaming platform leveraging Apache Kafka for ingestion, processing, and delivery of critical business data.
- Oversee centralized configuration management for both operating systems and deployed applications, ensuring consistency and reliability.
- Maintain a centralized solution for logging and metric monitoring to enable visibility and proactive issue management. 
Employees may be required to participate in an on-call rotation, ensuring availability outside regular working hours to respond to urgent issues as needed, as per the on-call policy.
Senior DevOps & Architecture Engineer
Blink Tech Inc
Wymagania
- Kubernetes
- AWS
- Docker
- Jenkins
- Python
Senior DevOps & Architecture Engineer A Senior DevOps and architecture professional is responsible for establishing, maintaining, and optimizing the continuous integration and deployment (CI/CD) systems and practices. They collaborate closely with development, operations, and security teams to automate and streamline operational processes, ensuring system stability, scalability, and security. The ideal candidate holds a Bachelor's or Master's degree in a relevant field and has at least five years of experience in DevOps roles. They must be proficient in AWS, various CI/CD tools, and cloud platforms and possess excellent problem-solving and communication skills. Essential Duties and Responsibilities:
- Design and implement CI/CD pipelines, aligning with software development and deployment processes.
- Automate and optimize operational processes to reduce the risk of errors and increase efficiency.
- Collaborate with development teams to understand software features and technical implementation, ensuring architecture alignment with business requirements.
- Proactively monitor system performance and capacity management, ensuring system scalability, reliability, and efficiency.
- Ensure system security by collaborating with security teams and implementing secure DevOps practices.
- Conduct regular system audits to detect deviations from the established process, and implement corrective actions.
- Lead the definition and implementation of cloud architecture, including containerization and orchestration systems.
- Mentor junior team members, promoting a culture of continuous improvement and learning.
- Maintain up-to-date knowledge of the latest industry trends and technologies.
- Production Support & Incident Management: Respond to production issues, troubleshoot incidents, and ensure high availability of critical systems. Conduct root cause analysis (RCA) and post-mortems, implement corrective actions, and participate in on-call rotations. 
Utilize logging and monitoring tools for proactive detection of system anomalies.
IT System Engineer
GET IT TOGETHER SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Gliwice, Warszawa
Wymagania
- VMware (nice to have)
- Experience in a similar role
- Domain certifications (nice to have)
About the company: At Get IT Together, we specialize in providing highly qualified IT experts in the areas of project management, architecture, business and systems analysis, and software development. Responsibilities:
- Participating in client meetings as part of presales activities, in cooperation with the sales team
- Identifying and analyzing clients’ business and technical needs
- Developing technical concepts and architectures of hardware and software solutions for offers and projects
- Delivering implementation projects and cooperating with the project team at all stages (from design to solution launch)
- Preparing technical documentation and conducting knowledge transfer to client teams
- Continuously improving qualifications and cooperating with technology vendors and partners
We offer:
- A flexible form of employment: B2B or an employment contract
- A hybrid work model in conveniently located offices in Warsaw or Gliwice
- Access to training and certification opportunities
- A benefits package (private medical care, sports card, and more)
- Participation in innovative projects using the latest technologies
Engineering Manager - Platform
bunch
Requirements
- Kubernetes
- AWS
- Docker
- Terraform
- TypeScript
Engineering Manager - Platform bunch is building the backbone of private markets, combining exceptional expertise, operational excellence, and frictionless technology. The platform enables funds and private investors to set up and manage their investment entities seamlessly and securely. To expand our core team, we are hiring an Engineering Manager - Platform who wants to play a pivotal role in growing our product development team and transforming the private markets industry. Your primary responsibilities will be ensuring our engineering output delivers tangible value, setting technical and architectural direction, and managing team resources. You will be involved in shaping the future of the tech team as we grow to the next stage. We are focusing on Product Excellence and building sustainable tech processes. How the role contributes to the company vision - You will be at the core of building the technology that powers bunch. We aim to build something meaningful with a real-world impact - You will lead a platform team and have full accountability for delivering reliable, secure, and scalable infrastructure and tooling that enable product squads to solve real customer problems. What you will be doing - Lead, grow, and support the platform team through coaching, clear expectations, and a culture of ownership, reliability, and continuous improvement. - Own the platform strategy and roadmap, prioritising initiatives, managing team capacity, and ensuring strong execution and impact. - Provide technical direction for our AWS- and Kubernetes-based platform, guiding architectural decisions that balance speed, reliability, security, and cost. - Oversee cloud infrastructure and IaC practices, ensuring the team delivers secure, resilient, well-managed AWS environments. - Drive improvements to CI/CD and developer experience by setting direction, aligning stakeholders, and removing friction for product teams.
- Champion observability and reliability, ensuring robust logging, metrics, tracing, and effective incident response practices. - Partner closely with product, engineering, security, and operations experts in the company to align standards, resolve cross-team issues, and support smooth service integration. - Maintain platform quality by steering tech-debt reduction, documentation, and alignment with security and compliance expectations. About bunch bunch is Europe’s leading tech-enabled fund administrator for VC, PE, and alternative assets. By combining AI-powered automation with deep domain expertise, we provide a single source of truth that replaces outdated, fragmented processes, and enables clients to master the entire fund lifecycle. Private markets are experiencing unprecedented growth, with alternative assets projected to reach $40 trillion by the end of the decade. To power this growth, we have raised €22.8 million to date (including our $15.5M Series A in July 2024) and are accelerating our mission to build the backbone of private markets in Europe. Founded in 2021 and headquartered in Berlin, bunch has expanded to Amsterdam, London, and Luxembourg, now supporting over 500 investment structures, 150 fund and asset managers, and more than 10,000 institutional investors. As we prepare for our next stage of growth, we are looking for ambitious talent to continue redefining this financial category.
DevOps Engineer (Databricks) | f/m/d
ERGO Technology & Services S.A.
⚲ Warsaw, Gdansk
Requirements
- Azure
- Terraform
- PowerShell
- Ansible
The Big Data & Analytics Platform (BDAP) program is a comprehensive, multi-year initiative designed to establish and operate our enterprise analytics infrastructure. The program predominantly utilizes Microsoft Azure components and is deeply integrated into our broader enterprise environment. Additional key features of the program include advanced tools like Databricks, RStudio, Alation and more, which serve to enhance and expand our data analytics capabilities. How you will get the job done - managing deployment automation of Azure PaaS/SaaS services within the context of a Big Data & Analytics platform - automating configuration validation and re-establishment to comply with IT security requirements, and ensuring platform stability - implementing end-to-end solution automations for the complete machine learning lifecycle - conducting root cause analysis of complex issues reported by the support team - actively supporting the Product Owner and Solution Architect in further product development
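The configuration-validation duty described above can be pictured with a small sketch. This is illustrative only, not ERGO's actual tooling; the baseline keys and resource fields are invented for the example.

```python
# Illustrative sketch: check resource configurations against a security
# baseline, the kind of automated validation the role above involves.
# BASELINE keys and resource fields are hypothetical, not a real Azure schema.

BASELINE = {
    "min_tls_version": "1.2",
    "public_network_access": False,
    "encryption_at_rest": True,
}

def validate(resource: dict, baseline: dict = BASELINE) -> list[str]:
    """Return a list of human-readable violations for one resource config."""
    violations = []
    # Lexicographic comparison is adequate for "1.0" / "1.1" / "1.2"-style values.
    if resource.get("min_tls_version", "1.0") < baseline["min_tls_version"]:
        violations.append(f"{resource['name']}: TLS below {baseline['min_tls_version']}")
    if resource.get("public_network_access", True) and not baseline["public_network_access"]:
        violations.append(f"{resource['name']}: public network access enabled")
    if not resource.get("encryption_at_rest", False):
        violations.append(f"{resource['name']}: encryption at rest disabled")
    return violations
```

In practice a job like this would pull the live configuration from the cloud API, run checks of this shape, and re-apply the compliant settings ("re-establishment") automatically.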
DevOps - Volunteer
NEURON FOUNDATION
⚲ Cienin Kościelny (Słupca County)
Requirements
- AWS
- Terraform
- 1 year of experience as a DevOps engineer
- Aged 18 or over
- Strong communication skills
Responsibilities: - Automating processes - Managing infrastructure - Managing cloud environments - Monitoring and incident response - Cooperating with development teams We offer: - Flexible working hours - Notion premium - Free access to ChatGPT 4.5 turbo with visual - Free access to DALL-E - Gift cards and premium accounts for engagement - A flexible internship arrangement - Free Canva PRO - Free GitHub Copilot - English and Spanish lessons
Data Engineer
Monq Ltd
Requirements
- Python
- AWS
- Azure
- Spark
- Kafka
Data Engineer - Monq We are looking for brilliant engineers to join our team at Monq. We're pushing the boundaries of AI with next-generation agentic systems that can autonomously negotiate high-value enterprise contracts. We're focusing on a $4.2 trillion market of strategic procurement that has been largely untouched by AI innovation. Our mission is to transform how enterprises negotiate their most critical deals, turning strategic procurement from a manual, gut-driven process into a competitive advantage powered by intelligent AI agents. About the Role Data Engineers are the backbone of Monq. They design, build, and maintain the scalable data infrastructure that powers our autonomous negotiation agents. You'll work closely with AI engineers, product teams, and procurement domain experts to transform complex contract data, vendor information, and negotiation histories into structured, reliable insights that drive intelligent decision-making across our platform. We're creating the data foundation for AI agents to parse multi-dimensional deal terms, benchmark contract conditions, and optimise negotiation outcomes in high-stakes enterprise environments. This role is an opportunity to work on deeply technical challenges with real-world impact in Fortune 500 procurement operations. 
What You Will Do Build Enterprise Contract Data Infrastructure: - Design and operate performant, scalable ingestion pipelines processing high-volume contract data, vendor databases, and procurement system integrations from Fortune 500 enterprises - Create sophisticated data parsing systems for complex legal/procurement documents, multi-dimensional deal terms, and procurement workflow data - Build real-time data feeds supporting autonomous negotiation agents and strategic decision-making dashboards Manage Strategic Procurement Data Architecture: - Define, evolve, and manage data schemas for contract intelligence, vendor benchmarking, and negotiation performance analytics—from raw contract staging to high-quality feature stores for AI agents - Build comprehensive contract data catalogues ensuring discoverability and lineage tracking for complex procurement datasets - Design data models that support multi-dimensional optimisation across price, terms, risk, timeline, and relationship factors Ensure Enterprise-Grade Data Operations: - Build end-to-end monitoring and observability for mission-critical negotiation data pipelines: owning data quality, latency, completeness, and lineage for $1M+ contract processes - Champion secure, governed data practices for sensitive enterprise contract information: advanced access controls, secrets management, encrypted data-in-transit/at-rest, and compliance with enterprise security frameworks - Implement robust audit trails and data governance for high-stakes negotiation processes Enable AI-Powered Negotiation: - Collaborate closely with AI, Platform, and Product teams, provisioning datasets, feature tables, and contracts that power autonomous negotiation agents at scale - Build data infrastructure supporting real-time contract analysis, vendor research automation, and strategic negotiation planning - Create data pipelines that integrate with major procurement systems (SAP, Oracle, Coupa, GEP) and enterprise workflows Drive 
Operational Excellence: - Continuously improve efficiency and reliability via comprehensive testing, CI/CD automation, cost/performance tuning for enterprise-scale deployments - Lead incident response and root-cause analysis for data systems supporting critical procurement negotiations - Optimise data infrastructure costs while maintaining enterprise-grade performance and reliability standards About Monq Monq is building the first AI platform specifically designed for strategic procurement negotiation. We're creating a blue ocean in a $4.2 trillion market that has been ignored by existing AI solutions. Our team brings together deep technical expertise in AI with proven experience in enterprise software and procurement. We're backed by forward-thinking investors who understand the massive opportunity in applying advanced AI to high-value B2B negotiations. With several major enterprises joining us, we're positioned to become the category-defining platform for AI-powered procurement. We are founded and backed by Revolut and HSBC executives. Why This Matters: Every 1% improvement in strategic procurement represents a $42 billion market impact. We're not just building software - we're creating the future of how businesses negotiate their most critical deals.
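The "data quality, latency, completeness" ownership described in this listing boils down to checks like the following. This is a hedged sketch with invented field names, not Monq's pipeline code.

```python
# Illustrative data-quality checks: field completeness and record freshness.
# Field names ("updated_at" etc.) are hypothetical examples.
from datetime import datetime, timedelta, timezone

def completeness(records: list[dict], required: list[str]) -> float:
    """Fraction of records in which every required field is present and non-empty."""
    if not records:
        return 0.0
    ok = sum(1 for r in records if all(r.get(f) not in (None, "") for f in required))
    return ok / len(records)

def stale(records, ts_field="updated_at", max_age=timedelta(hours=24), now=None):
    """Return records whose timestamp is older than max_age (a freshness check)."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r[ts_field] > max_age]
```

Metrics like these would typically be computed per pipeline run and exported to monitoring, with alerts firing when completeness drops or staleness grows.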
Cloud Data Architect
ITHOUSE SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Warszawa
170 - 200 PLN/h net (+ VAT)
Requirements
- Microsoft Azure
- Google Cloud Platform
About the company: - Hi, great to have you here! 👋 - We are ITHOUSE, and for over 7 years we have been matching IT specialists with projects that truly matter and change the way Poles use technology every day. - That’s not an empty phrase! We work with public institutions and commercial companies, helping modernise systems used by millions of people. Thanks to the teams we build, fewer and fewer matters require running around offices with paperwork, and processes become simpler, faster and more accessible. But that’s not all! We also deliver ambitious R&D projects, including for Samsung, where technology meets artificial intelligence and innovation. - We try to match the project to the person: to their experience, preferences and what really motivates them. If you want to do meaningful work and have real impact, there’s a good chance this is it! - In our process we value partnership, transparency and mutual respect. For us, recruitment is not just a process; it’s a conversation about what matters to you and where you can grow. Responsibilities: - Managing the data model and connections to existing repositories. - Designing and developing Power BI dashboards for management reporting. - Maintaining and modifying existing reports and visualisations. - Adapting dashboards to stakeholders’ changing information needs. - Ensuring the consistency of data, business logic and the visual layer of reports. - Ongoing analytical support for Power BI. We offer: - Stable employment under a B2B contract. - A rate of 170 - 200 PLN/h net + VAT. - 100% remote work.
Backend/DevOps Engineer
Pragmatike
Requirements
- Kubernetes
- Docker
- AWS
- GCP
- Go
About the Role Pragmatike is hiring on behalf of a fast-growing AI and trading startup building the first orchestration protocol that unifies all trading dimensions under a single AI-powered layer. The platform enables seamless trading across crypto and traditional financial markets using institutional-grade, AI-driven strategies. The company is revolutionizing algorithmic trading through AI-powered autonomous agents that operate 24/7, removing emotional bias and optimizing execution. During its beta phase, the platform has already managed $500,000+ in assets and is scaling rapidly to support thousands of agents across multiple trading venues. We are looking for a Backend / DevOps Engineer to own and scale the infrastructure, security, and operational backbone of the platform as it grows from early traction to institutional scale. What You'll Do - Design, build, and operate scalable production infrastructure using Docker, Kubernetes, and AWS/GCP - Own deployment architecture and runtime environments for execution nodes and backend services - Build and maintain secure key management systems for API keys, wallets, and exchange integrations - Implement monitoring, logging, alerting, and incident response for mission-critical systems - Set up and maintain CI/CD pipelines and developer workflows (GitHub Actions) - Optimize infrastructure for reliability, security, performance, and cost - Collaborate closely with backend engineers supporting execution, settlement, and receipts systems - Ensure platform stability as the system scales to thousands of autonomous trading agents - Contribute to infrastructure best practices, automation, and operational documentation
Devops Engineer
DCG sp. z o.o.
⚲ Kraków
19 000 - 22 000 PLN (B2B)
Requirements
- Linux
- Docker
- Ansible
- Jenkins
- GCP
- Terraform
- Network protocols
Requirements: - Very good knowledge of Linux systems - Experience with CI/CD tools (Jenkins, GitLab, ArgoCD, Helm) - Practical knowledge of IaC solutions (Terraform, Ansible) and version control systems (Git) - Good knowledge of relational databases (MySQL) - Good knowledge of networking (IP, TCP/UDP, Ethernet, routing, IPSec) and the most common network protocols: HTTP, DNS, SMTP/POP/IMAP, FTP, SSH, LDAP, etc. - At least 3 years of experience in a similar role - Good knowledge of Kubernetes and containerisation on Linux (Docker) - Experience with public cloud (GCP/AWS/Azure) - Scripting skills (Bash, Python) - Knowledge of load balancing and high availability (e.g. HAProxy, Keepalived) Nice to have: - Knowledge of virtualisation (XEN, KVM) - Knowledge of NoSQL databases (MongoDB) - Experience with NetApp arrays, knowledge of storage technologies (NFS, iSCSI, SAN, NAS) Day-to-day tasks: - Designing IT systems and infrastructure, including solutions based on cloud computing and containers - Cooperating with teams building mobile and web applications (working in a DevOps methodology) - Installing, configuring and administering operating systems - Monitoring systems and networks - Troubleshooting and implementing improvements in the running infrastructure - Writing technical documentation
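The scripting and load-balancing items in this listing often meet in small utilities like the one below: a stdlib-only sketch of a TCP health probe of the sort HAProxy or Keepalived performs. Hosts and ports here are placeholders.

```python
# Minimal TCP health probe, standard library only. Backends are
# hypothetical (host, port) pairs, not any particular deployment.
import socket

def is_up(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_backends(backends: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Filter a backend list down to the reachable ones."""
    return [b for b in backends if is_up(*b)]
```

A real load balancer layers richer checks on top (HTTP status probes, rise/fall counters), but the core primitive is exactly this connect-and-timeout test.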
Senior DevOps Automation Engineer
CodiLime
⚲ Warszawa
22 000 - 31 000 PLN (B2B)
Requirements
- DevOps
- Terraform
- Ansible
- Prometheus
- Grafana
- AWS
- Amazon EKS
- Linux
- Docker
- Git
- GitLab CI/CD
- ArgoCD
- Helm
- Infrastructure as Code
- SQL (nice to have)
- NoSQL (nice to have)
- Python (nice to have)
- OOP (nice to have)
About the project: Get to know us better CodiLime is a software and network engineering industry expert and the first-choice service partner for top global networking hardware providers, software providers and telecoms. We create proofs-of-concept, help our clients build new products, nurture existing ones and provide services in production environments. Our clients include both tech startups and big players in various industries and geographic locations (US, Japan, Israel, Europe). While no longer a startup (we have 250+ people on board and have been operating since 2011), we’ve kept our people-oriented culture. Our values are simple: - Act to deliver. - Disrupt to grow. - Team up to win. The project and the team We are a Developer team creating a network orchestration framework. This framework models, automates, and manages the full life cycle, including deployment, of Data Center networking and services in any environment. By collaborating with us, you’ll engage with a globally recognized hi-tech company and have a significant impact on their modern, flagship solution. Our Engineers are involved in the whole product life cycle: design, PoC, demo, and deployment. We work on multiple interesting projects at the same time, so it may happen that we’ll invite you to the interview for another project if we see that your competencies and profile are well-suited for it. More reasons to join us - Flexible working hours and approach to work: fully remote, in the office, or hybrid - Professional growth supported by internal training sessions and a training budget - Solid onboarding with a hands-on approach to give you an easy start - A great atmosphere among professionals who are passionate about their work - The ability to change the project you work on Requirements: Do we have a match?
As a Senior DevOps Automation Engineer, you must meet the following criteria: - Minimum 7 years of professional experience - Experience working with CI/CD systems, preferably with GitLab CI and ArgoCD - Good knowledge of Kubernetes (including Helm), with experience in writing operators and GitOps patterns - Proven hands-on experience with Infrastructure as Code (IaC) tools and services such as Terraform and Ansible - Hands-on experience with setup and maintenance of observability tools such as Prometheus, Grafana, and OpenTelemetry - Experience working with AWS Platform services, especially with EKS - Experience with Linux and Docker - Experience with release lifecycle management - Experience with Git - Knowledge of computer networks - Communication skills, proficiency in English (min. B2 level), ability to collaborate on technical solutions Beyond the criteria above, we would appreciate the nice-to-haves: - Familiarity with SQL and NoSQL databases - Experience with network source-of-truth (SoT) platforms such as Nautobot, NetBox, and/or similar - Hands-on experience with scripting in Python (OOP is not a must) - Ability to collaborate in distributed teams Day-to-day tasks: - Analyzing customer requirements based on High Level Design and preparing tailored CI/CD pipelines and infrastructure - Automating and orchestrating the deployment of virtualized solutions in private or public cloud - Working in agile methodology and collaborating with the rest of the team (Product Owners, Scrum Masters, DevOps Engineers, Solution Architects) - Mentoring teammates and sharing knowledge
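The GitOps patterns this listing asks for reduce to one core loop: diff the desired state (stored in Git) against the observed state (reported by the cluster) and act on the difference. A toy, dependency-free illustration of that loop, not CodiLime's code:

```python
# Toy GitOps reconciler: given desired and observed state keyed by
# resource name, derive the actions a controller would take.
# Resource names and specs below are invented examples.

def reconcile(desired: dict, observed: dict) -> dict:
    """Return the create/update/delete actions needed to converge observed -> desired."""
    return {
        "create": sorted(set(desired) - set(observed)),      # in Git, not in cluster
        "delete": sorted(set(observed) - set(desired)),      # in cluster, not in Git
        "update": sorted(k for k in set(desired) & set(observed)
                         if desired[k] != observed[k]),      # drifted specs
    }
```

Tools such as ArgoCD run this comparison continuously against Kubernetes manifests; the value of the pattern is that Git stays the single source of truth and drift is surfaced rather than silently accumulating.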
Azure Cloud Architect
Accenture Polska
⚲ Warsaw, Krakow, Gdansk, Wroclaw
Requirements
- Kubernetes
- AKS
- PowerShell
- Python
As an Azure Cloud Architect in our Accenture Cloud First team, you will help enterprise clients design and evolve secure, scalable and cost‑efficient cloud landscapes, supporting their digital transformation journeys. Your work will include: - Designing end‑to‑end architectures for cloud, hybrid and on‑premises environments on Microsoft Azure, including Azure Landing Zones, governance and security models. - Defining and implementing compute environments (autoscaling, load balancing, serverless solutions), Azure database services and CI/CD pipelines, with integration to existing on‑premises services. - Designing cloud‑native networking (VNETs, peering, firewalls, container networking) and security & compliance controls (IAM, resource hierarchy, data security, cost management and billing). - Implementing monitoring, logging, profiling and alerting solutions to ensure reliability and observability of client platforms. - Preparing migration approaches and onboarding plans to Azure, including effort estimation, cost and resource calculations, and optimization scenarios. - Partnering with DevOps and engineering teams to apply Infrastructure as Code and automation practices, improve delivery pipelines and support a mature DevOps culture. - Leading or supporting incident, request, change and problem management activities, acting as a trusted technical advisor for client stakeholders. Flexible: The work location for this role may include a mix of working remotely, onsite at a client or in an Accenture office - depending on specific project circumstances. With all our roles, there is some in-person time for collaboration, learning and building relationships with clients, peers, leaders, and communities. As an employer, we will be as flexible as possible to support your specific work/life needs.
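One concrete slice of the landing-zone and VNET design work described above is making sure address spaces do not collide before peering networks. A small sketch using only the standard library; the VNet names and CIDR ranges are invented:

```python
# Check proposed VNet address spaces for overlaps, since Azure VNet
# peering requires non-overlapping CIDR ranges. Names/ranges are examples.
import ipaddress
from itertools import combinations

def overlapping_vnets(vnets: dict[str, str]) -> list[tuple[str, str]]:
    """Return pairs of VNet names whose CIDR ranges overlap (peering would fail)."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in vnets.items()}
    return [(a, b) for a, b in combinations(sorted(nets), 2)
            if nets[a].overlaps(nets[b])]
```

Running a check like this over a planned hub-and-spoke layout is a cheap way to catch addressing mistakes before any infrastructure is provisioned.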
AWS DevOps (Terraform) Internship Program (She/He/They)
Accenture Polska
⚲ Lodz, Katowice, Wroclaw, Gdansk, Szczecin
Requirements
- AWS
- Jenkins
- Git
Accenture is a leading global professional services company that helps the world’s leading businesses, governments and other organizations build their digital core, optimize their operations, accelerate revenue growth and enhance citizen services. We offer solutions and assets across Strategy & Consulting, Technology, Operations, Industry X and Accenture Song. Advanced Technology Center unites experts in web and mobile solutions, delivering global projects across banking, telecom, e-commerce, and many more industries. Through close collaboration and exchange of experiences between developers and designers, the team focuses on crafting innovative solutions for our clients. For more information about Advanced Technology Center please visit our website. THE WORK: - You will work on projects involving CI/CD configuration and infrastructure installation for solutions built in the AWS cloud, while also considering security and scalability. - You will be responsible for analyzing and fixing bugs, configuring tools, and preparing infrastructure for new projects.
AWS Backend Developer (Python/Node.js) Internship Program (She/He/They) (Evergreen) (Open)
Accenture Polska
⚲ Lodz, Katowice, Warsaw, Wroclaw, Gdansk, Szczecin
Requirements
- Python
- Node.js
- AWS
- Git
- Terraform
Accenture is a leading global professional services company that helps the world’s leading businesses, governments and other organizations build their digital core, optimize their operations, accelerate revenue growth and enhance citizen services. We offer solutions and assets across Strategy & Consulting, Technology, Operations, Industry X and Accenture Song. Advanced Technology Center unites experts in web and mobile solutions, delivering global projects across banking, telecom, e-commerce, and many more industries. Through close collaboration and exchange of experiences between developers and designers, the team focuses on crafting innovative solutions for our clients. THE WORK: - You will work on projects involving CI/CD configuration and infrastructure installation for solutions built in the AWS cloud, while also considering security and scalability. - You will be responsible for analyzing and fixing bugs, configuring tools, and preparing infrastructure for new projects.
AWS Backend Developer with Python/Node.js
Accenture Polska
⚲ Lodz
Requirements
- Python
- Node.js
- Terraform
- Lambda
Accenture is a leading global professional services company that helps the world’s leading businesses, governments, and other organizations build their digital core, optimize their operations, accelerate revenue growth, and enhance citizen services. We offer solutions and assets across Strategy & Consulting, Technology, Operations, Industry X, and Accenture Song. THE WORK: - You will be part of the AWS Backend team responsible for implementing business logic and building cloud infrastructure on AWS with a strong focus on security and scalability. - You will analyze and resolve defects, enhance existing functionalities, and develop new backend components. - You will work with AWS Serverless services including Lambda, Step Functions, and API Gateway. - You will apply best practices related to AWS cloud security, REST API design, and Infrastructure‑as‑Code (Terraform). Flexible: The work location for this role may include a mix of working remotely, onsite at a client or in an Accenture office – depending on specific project circumstances. With all our roles, there is some in‑person time for collaboration, learning and building relationships with clients, peers, leaders, and communities. As an employer, we will be as flexible as possible to support your specific work/life needs.
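For readers unfamiliar with the serverless stack named above, a Lambda function behind API Gateway (proxy integration) is just a handler returning a statusCode/headers/body dict. A minimal, illustrative sketch, not project code; the route and parameter are invented:

```python
# Minimal AWS Lambda handler for an API Gateway proxy integration.
# The "name" query parameter and the greeting are invented for illustration;
# the response shape (statusCode/headers/body) is what API Gateway expects.
import json

def handler(event: dict, context: object = None) -> dict:
    """Respond to GET /hello?name=... with a JSON greeting."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

In a setup like the one the ad describes, Terraform would define the Lambda function, the API Gateway route, and the IAM role, so the whole path from HTTP request to handler is reproducible as code.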
DevSecOps Engineer
Antal Sp. z o.o.
⚲ Kraków
Requirements
- DevOps
- Jenkins
- CI/CD
- Maven
- Terraform
- Groovy
- English
Location: Kraków. Work mode: hybrid. We are looking for a DevSecOps Engineer for an international financial institution. The person in this role will be responsible for developing and maintaining a Jenkins-based pipeline platform, ensuring the security and optimisation of CI/CD processes. Responsibilities: • Designing and maintaining pipeline steps in Groovy (build, test, packaging, scanning, deployment). • Extending Python tools for generating SLSA provenance and SBOMs. • Optimising pipeline performance, including parallel builds, caching and dependency pruning. • Refactoring legacy scripts and ensuring artifact integrity. • Documenting ci-config.yaml standards and mentoring on secure pipeline development. Requirements: • Around 5 years of engineering experience, including 3 years in CI/CD or DevSecOps. • Advanced knowledge of Jenkins and Groovy. • Experience with Python, Maven, NPM, Helm and Terraform. • Familiarity with supply-chain security tools: SonarQube, Sonatype IQ, SAST. • Experience in pipeline performance optimisation. Nice to have: • Experience with GitOps and release automation. • Familiarity with artifact signing (cosign, OCI). We offer: • Work in an international environment with global reach. • Hybrid work. • A benefits package (private medical care, a Multisport card). Why apply for an Antal job offer? When your application is successful, you will be supported by a dedicated Consultant who will stay in regular contact with you (via email or phone), help you prepare for interviews with your future employer, and ensure a smooth and professional recruitment process. About Antal Antal is a leading recruitment and HR advisory company, present in Poland since 1996, with later expansion to the Czech Republic and Hungary.
Across the CEE region, we employ around 150 professionals who deliver a full range of services – from specialist and executive recruitment, employee outsourcing and HR consulting, to employer branding and market research. Our division-based structure combines deep industry expertise with functional specialisation, enabling us to provide tailored solutions for companies in every sector. We act as a trusted partner for both employers and candidates, sharing our knowledge and guiding them through every stage of the talent journey. We connect exceptional people with the right opportunities and help organisations build successful teams.
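The SBOM-generation duty in the DevSecOps listing above (CycloneDX is the common target format) can be sketched in a few lines of Python. This is a deliberately minimal illustration; real tooling such as the cyclonedx-python library emits many more fields (licenses, hashes, purls):

```python
# Minimal generator for a CycloneDX-shaped SBOM document from a resolved
# dependency list. Component entries are invented examples; only the
# top-level field names follow the CycloneDX spec.
import uuid

def make_sbom(components: list[dict]) -> dict:
    """Build a minimal CycloneDX-style SBOM dict from (name, version) deps."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "version": 1,
        "components": [
            {"type": "library", "name": c["name"], "version": c["version"]}
            for c in components
        ],
    }
```

In a pipeline, a step like this runs after dependency resolution, and the resulting document is attached to the build artifact (and, with cosign, signed) so consumers can audit exactly what went into a release.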
Solution Architect
Addepto
⚲ Warszawa, Kraków, Wrocław, Katowice, Gdańsk
25 200 - 33 600 PLN net (B2B)
Requirements
- WebLogic
- Architecture
- Azure
- English
- Kubernetes
- Kafka
- Java
- Oracle
- RESTful API
- Event Streaming
Addepto is a leading AI consulting (https://addepto.com/ai-consulting/) and data engineering (https://addepto.com/data-engineering-services/) company that builds scalable, ROI-focused AI solutions for some of the world's largest enterprises and pioneering startups, including Rolls Royce, Continental, Porsche, ABB, and WGU. With an exclusive focus on Artificial Intelligence and Big Data, Addepto helps organizations unlock the full potential of their data through systems designed for measurable business impact and long-term growth. The company's work extends beyond client engagements. Drawing from real-world challenges and insights, Addepto has developed its own product - ContextClue - and actively contributes open-source solutions to the AI community. This commitment to transforming practical experience into scalable innovation has earned Addepto recognition by Forbes as one of the top 10 AI consulting companies worldwide. As part of KMS Technology, a US-based global technology group, Addepto combines deep AI specialization with enterprise-scale delivery capabilities—enabling the partnership to move clients from AI experimentation to production impact, securely and at scale. We are looking for a visionary and hands-on Solution Architect to design and guide the implementation of robust, scalable, and secure software solutions. This role requires a deep understanding of both modern and legacy architectures, with a strong emphasis on integrating disparate systems and leveraging cloud-native technologies. The ideal candidate will bridge the gap between business requirements and technical execution, ensuring our solutions are aligned with strategic goals and best practices. 🚀 In this position, you will: • Lead the design and development of end-to-end architectural solutions, ensuring alignment with enterprise architecture standards, business requirements, and technology strategy. 
Senior/Lead Software Engineer
Addepto
⚲ Warszawa, Kraków, Wrocław, Katowice, Gdańsk
20 000 - 28 650 PLN net (B2B)
Requirements
- WebLogic
- Architecture
- Azure
- English
- Kubernetes
- Kafka
- Java
- Oracle
- RESTful API
- Event Streaming
Addepto is a leading AI consulting (https://addepto.com/ai-consulting/) and data engineering (https://addepto.com/data-engineering-services/) company that builds scalable, ROI-focused AI solutions for some of the world's largest enterprises and pioneering startups, including Rolls Royce, Continental, Porsche, ABB, and WGU. With an exclusive focus on Artificial Intelligence and Big Data, Addepto helps organizations unlock the full potential of their data through systems designed for measurable business impact and long-term growth.
The company's work extends beyond client engagements. Drawing from real-world challenges and insights, Addepto has developed its own product - ContextClue - and actively contributes open-source solutions to the AI community. This commitment to transforming practical experience into scalable innovation has earned Addepto recognition by Forbes as one of the top 10 AI consulting companies worldwide. As part of KMS Technology, a US-based global technology group, Addepto combines deep AI specialization with enterprise-scale delivery capabilities, enabling the partnership to move clients from AI experimentation to production impact, securely and at scale.
We are looking for a visionary and hands-on Senior/Lead Software Engineer to design and guide the implementation of robust, scalable, and secure software solutions. This role requires a deep understanding of both modern and legacy architectures, with a strong emphasis on integrating disparate systems and leveraging cloud-native technologies. The ideal candidate will bridge the gap between business requirements and technical execution, ensuring our solutions are aligned with strategic goals and best practices.
🚀 In this position, you will:
• Lead the design and development of end-to-end architectural solutions, ensuring alignment with enterprise architecture standards, business requirements, and technology strategy.
• Provide technical leadership and guidance to development teams throughout the project lifecycle, from conception to deployment.
• Evaluate and recommend appropriate technologies, tools, and platforms to solve complex business problems, considering both cost and long-term viability.
• Design and oversee the integration of diverse systems, including legacy platforms (WebLogic, IBM MQ) and modern cloud services.
• Champion Domain-Driven Design principles to ensure solutions are modular, maintainable, and reflect core business domains.
• Drive the adoption and optimal utilization of Microsoft Azure services, including Application Gateway and Redis, ensuring high availability, scalability, and security.
• Design and implement event streaming solutions using Kafka, fostering asynchronous communication and real-time data processing.
• Leverage Kubernetes for container orchestration, ensuring efficient deployment, scaling, and management of microservices.
• Define performance and scalability requirements and ensure architectural designs meet these non-functional criteria.
• Create and maintain comprehensive architectural documentation, including design specifications, diagrams, and technical standards.
• Mentor and guide junior architects and developers, fostering a culture of technical excellence and continuous improvement.
• Effectively communicate complex technical concepts to both technical and non-technical stakeholders.
🎯 What you’ll need to succeed in this role:
• At least 5 years of proven commercial experience with Java development and strong proficiency in Oracle database technologies, including performance tuning and schema design.
• Extensive experience in designing and implementing solutions across both modern, cloud-native architectures and legacy systems.
• Hands-on experience with Microsoft Azure services, including but not limited to Azure Compute, Networking, Storage, Azure Application Gateway, and Azure Cache for Redis.
• Strong understanding of and practical experience with Kafka for building event-driven architectures, as well as with traditional messaging systems such as IBM MQ.
• Solid experience with Kubernetes for deploying and managing containerized applications.
• Experience with WebLogic Server for deploying and managing enterprise Java applications.
• Expert knowledge of Domain-Driven Design (DDD) principles and their application in real-world projects.
• Experience designing RESTful APIs and other integration patterns.
• Exceptional analytical and problem-solving skills with the ability to troubleshoot complex issues across distributed systems.
• Excellent verbal and written communication skills, with the ability to articulate complex technical concepts clearly and concisely.
• Proven ability to lead and influence technical teams and stakeholders.
• Fluent English (C1 level) is a must.
🎁 Discover our perks and benefits:
• Work in a supportive team of passionate enthusiasts of AI & Big Data.
• Engage with top-tier global enterprises and cutting-edge startups on international projects.
• Enjoy flexible work arrangements, allowing you to work remotely or from modern offices and coworking spaces.
• Accelerate your professional growth through career paths, knowledge-sharing initiatives, language classes, and sponsored training or conferences, including a partnership with Databricks, which offers industry-leading training materials and certifications.
• Choose your preferred form of cooperation - B2B or a contract of mandate - and enjoy 20 fully paid days off.
• Participate in team-building events and utilize the integration budget.
• Celebrate work anniversaries, birthdays, and milestones.
• Access medical and sports packages, eye care, and well-being support services, including psychotherapy and coaching.
• Get full work equipment for optimal productivity, including a laptop and other necessary devices.
• With our backing, you can boost your personal brand by speaking at conferences, writing for our blog, or participating in meetups.
• Experience a smooth onboarding with a dedicated buddy, and start your journey in our friendly, supportive, and autonomous culture.
Are you interested in Addepto and would like to join us? Get in touch! We are looking forward to receiving your application. Would you like to know more about us? Visit our website (career page) and social media (Facebook, LinkedIn, Instagram).
Site Reliability Engineer with English
Capgemini Polska
⚲ Gdańsk, Katowice, Kraków, Poznań, Wrocław, Opole, Lublin, Warszawa
Requirements
- AWS
- Google Cloud Platform
- OpenStack
- Ansible
- Chef
- GitHub
- GitLab
- Microsoft Azure Cloud
- Bitbucket
- Puppet
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you’d like, where you’ll be supported and inspired by a collaborative community of colleagues around the world, and where you’ll be able to reimagine what’s possible. Join us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
YOUR TASKS
• Infrastructure Management - design, implement, and manage cloud infrastructure within Azure, including IaaS and PaaS offerings.
• Automation - develop and maintain Infrastructure as Code (IaC) to automate provisioning and management of Azure resources.
• Monitoring and Optimization - implement and maintain monitoring solutions (e.g., Azure Monitor) to track system health, performance, and resource utilization.
• Incident Response - respond to incidents, identify root causes, and deploy fixes to resolve issues.
• Collaboration - work with cross-functional teams (Dev, Ops, etc.) to improve systems, processes, and overall service reliability.
• Performance Tuning - identify and implement solutions to optimize cloud resource performance and cost efficiency.
• Security - design and implement secure cloud environments, following best practices and security frameworks.
• Documentation and Training - document procedures, policies, and best practices, and potentially mentor junior engineers.
YOUR PROFILE
You need to have:
• Experience in large-scale IT environments.
• Experience working with Azure Cloud technologies (other clouds, e.g. AWS and/or GCP, are nice to have).
• Experience in provisioning automation and configuration (e.g. Puppet, Ansible, Chef, OpenStack).
• Experience with version control tools (e.g. GitHub, Bitbucket, GitLab) and CI/CD tools (e.g. Bamboo, Jenkins).
• Proficiency in at least one scripting language (e.g. Python, Bash, Perl, PowerShell).
• Strong understanding of network infrastructure and security concepts, and of IaC tooling (e.g. Terraform, Bicep).
Nice to have (not required):
• Experience in provisioning automation and configuration using at least one of: Puppet, Ansible, Chef, OpenStack.
• Container management and development (Docker, Kubernetes).
WHAT YOU’LL LOVE ABOUT WORKING HERE
• Well-being culture: medical care with Medicover, private life insurance, and Medicover Sport card. But we went one step further by creating our own Capgemini Helpline offering therapeutic support if needed and the educational podcast "Let's talk about wellbeing", which you can listen to on Spotify.
• Access to over 70 training tracks with certification opportunities (e.g., GenAI, Excel, Business Analysis, Project Management) on our NEXT platform. Dive into a world of knowledge with free access to the Education First languages platform, Pluralsight, TED Talks, and Coursera and Udemy Business materials and trainings.
• Enjoy a hybrid working model that fits your life - after completing onboarding, combine work from a modern office with ergonomic work from home, thanks to a home office package (including laptop, monitor, and chair). Ask your recruiter about the details.
• Continuous feedback and ongoing performance discussions thanks to our performance management tool GetSuccess, supported by a transparent performance management policy.
GET TO KNOW US
Capgemini is committed to diversity and inclusion, ensuring fairness in all employment practices. We evaluate individuals based on qualifications and performance, not personal characteristics, striving to create a workplace where everyone can succeed and feel valued. Do you want to get to know us better? Check our Instagram — @capgeminipl or visit our Facebook profile — Capgemini Polska. You can also find us on TikTok! — @capgeminipl.
ABOUT CAPGEMINI
Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organization of over 360,000 team members globally in more than 50 countries. With its strong 55-year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fueled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering and platforms.
Apply now!
DevSecOps Consultant with French
Capgemini Polska
⚲ Gdańsk, Katowice, Kraków, Poznań, Wrocław, Opole, Lublin, Warszawa
Requirements
- Zalenium
- AWS
- DevOps
- Jenkins
- CI/CD
- DevSecOps
- Selenium
- Gitlab CI
- Azure DevOps
- SonarQube
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you’d like, where you’ll be supported and inspired by a collaborative community of colleagues around the world, and where you’ll be able to reimagine what’s possible. Join us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
YOUR TASKS
We are seeking a highly skilled and experienced DevSecOps Subject Matter Expert (SME) to lead the integration of security practices into our software development and operations processes. This role requires deep technical expertise in DevSecOps tools, cloud platforms, automation, and secure software development lifecycle (SDLC) methodologies.
• Lead the design, implementation, and optimization of DevSecOps pipelines.
• Integrate security controls into CI/CD workflows and infrastructure as code (IaC).
• Collaborate with development, operations, and security teams to ensure secure software delivery.
• Conduct threat modeling, vulnerability assessments, and risk analysis.
• Automate security testing and compliance checks.
• Provide guidance on secure coding practices and DevSecOps best practices.
• Evaluate and implement DevSecOps tools and technologies.
• Mentor and train teams on DevSecOps principles and secure development.
• Support audits and compliance initiatives.
YOUR PROFILE
• 5+ years of experience in DevOps, DevSecOps, or related roles.
• Strong knowledge of CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps).
• Experience with security tools (e.g., SonarQube).
• Proficiency with test automation tools: Selenium, Zalenium.
• Proficiency in cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
• Familiarity with IaC tools (Terraform, Ansible, CloudFormation).
• Deep understanding of secure SDLC, OWASP Top 10, and threat modeling.
• Excellent communication and leadership skills.
Nice to have:
• Bachelor’s or Master’s degree in Computer Science, Cybersecurity, or a related field.
• Additional certifications in cloud technologies or DevOps are a plus.
WHAT YOU’LL LOVE ABOUT WORKING HERE
• Well-being culture: medical care with Medicover, private life insurance, and Medicover Sport card. But we went one step further by creating our own Capgemini Helpline offering therapeutic support if needed and the educational podcast "Let's talk about wellbeing", which you can listen to on Spotify.
• Access to over 70 training tracks with certification opportunities (e.g., GenAI, Excel, Business Analysis, Project Management) on our NEXT platform. Dive into a world of knowledge with free access to the Education First languages platform, Pluralsight, TED Talks, and Coursera and Udemy Business materials and trainings.
• Enjoy a hybrid working model that fits your life - after completing onboarding, combine work from a modern office with ergonomic work from home, thanks to a home office package (including laptop, monitor, and chair). Ask your recruiter about the details.
• Continuous feedback and ongoing performance discussions thanks to our performance management tool GetSuccess, supported by a transparent performance management policy.
GET TO KNOW US
Capgemini is committed to diversity and inclusion, ensuring fairness in all employment practices. We evaluate individuals based on qualifications and performance, not personal characteristics, striving to create a workplace where everyone can succeed and feel valued. Do you want to get to know us better? Check our Instagram — @capgeminipl or visit our Facebook profile — Capgemini Polska. You can also find us on TikTok! — @capgeminipl.
ABOUT CAPGEMINI
Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organization of over 360,000 team members globally in more than 50 countries. With its strong 55-year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fueled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering and platforms.
Apply now!
DevSecOps
PKO BP Finat
⚲ Warszawa, Lublin
Requirements
- MSSQL server
- Linux
- Red Hat
For the structures of PKO BP bank, we are looking for a Security Administrator / DevSecOps engineer.
Form of cooperation: B2B
Work mode: hybrid (2 days per week in the Warszawa/Lublin office)
Job description
As part of the team, we ensure the highest possible level of availability, performance, and security of systems and applications. With us, you will take part in designing the architecture and tuning the services of the PKO Bank Polski Capital Group.
Requirements
• knowledge of Linux (Red Hat Enterprise Linux, Debian) and Windows Server systems
• ability to administer relational databases: MS SQL Server, PostgreSQL, Oracle
• ability to plan, maintain, and develop application and database environments
• ability to maintain, configure, and develop CI/CD tools
• knowledge of software development and automation processes
• monitoring the availability, performance, and capacity of infrastructure and applications
• knowledge of telecommunications topics, including VoIP
• knowledge of virtualization and containerization solutions
• general technical knowledge of telecommunications, infrastructure, and software
• experience in maintaining Call Center systems is a plus
DevOps Engineer | Hybrid Krakow
DCV Technologies
⚲ Kraków
Requirements
- Nexus
- Ansible
- SDLC
- Mongo DB
- Java
- SQL
- MS Azure Cloud
DevOps Engineer
📌 We are seeking a highly experienced DevOps Engineer on behalf of our client, the largest Swiss financial holding company.
📍 Work mode: hybrid - 3 days a week working from the client's office in Kraków
• Overall 6+ years of IT experience, including 4+ years of experience developing Java applications (Spring framework would be a plus)
• Solid SQL database skills (ideally Oracle; experience working with MongoDB would be a plus)
• Able to develop and maintain Java-based applications
• Able to drive continuous integration and improvements
• Able to drive architecture, standards, and guidelines for a globally deployed software solution
• Familiarity with DevOps tools such as Git, GitLab Pipelines, Nexus, Ansible, and RPM packaging would be beneficial
• Familiarity with MS Azure Cloud platforms and knowledge of Databricks and Denodo would be beneficial
• Comfortable in a Linux-based server environment, incl. bash (shell) scripting
• Good understanding of all aspects of the application development (SDLC) and support life cycle
• Well-developed business communication skills, both written and verbal, and experience working as part of a global team
• Strong personal prioritization and time management
• Strong analytical, problem-solving, and synthesizing skills (you know how to figure things out)
• Knowledge of UBS SDLC platforms and processes, such as REX (release executor), the RFC process, and GSNOW/Atlas, is preferred
✅ Benefits:
• Work with modern technologies and international teams
• Long-term and stable cooperation
• Professional growth opportunities
📩 If you’re interested and meet the qualifications, please send your CV to Alina Pchelnikova at alina.pchelnikova@dcvtechnologies.co.uk