Job offers
Manual / Automation Tester (Data Warehouses) (m/f/x)
UPVANTA SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Wrocław, Fabryczna
Requirements
- SQL
Our requirements:
• Minimum 2 years of experience as a manual or automation tester
• Hands-on experience with data warehouses and with testing large data volumes (a hard requirement)
• University degree (preferred: computer science, mathematics, economics, or another STEM field)
• Knowledge of testing methodologies and software engineering fundamentals
• Basic knowledge of relational databases
• SQL skills (writing queries)
• Programming basics (any high-level language)
• English sufficient to work with technical documentation
• Logical thinking and problem-solving skills
• Communication skills and the ability to work in a team
• Accountability and good work organization
• Ability to work under time pressure
Nice to have:
• Experience in the banking sector
• Familiarity with CI/CD tools and process automation
About the project:
Rate: 500-600 per man-day (MD), B2B. We are looking for a manual tester (with documented first experience in automation) or an automation tester with hands-on experience working with data warehouses; large data volumes and experience testing them are a hard requirement.
Responsibilities:
• Designing test cases and preparing test data
• Executing tests and documenting their results
• Identifying, analyzing, and reporting defects
• Designing and building mocks
• Reporting project risks
• Automating tests with tools such as Cucumber, X-Ray, and Gherkin
• Testing large-volume data processing in data warehouses
We offer:
• Participation in projects involving large-scale data processing
• Growth opportunities in test automation and Data Engineering
• Work in an experienced team of specialists
• A flexible form of cooperation
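The hard requirement here is testing large data volumes in a warehouse. A common technique for that is source-to-target reconciliation: comparing row counts and column checksums between a staging table and its warehouse target. Below is a minimal, self-contained sketch in Python; the table names, the `amount` column, and the in-memory SQLite stand-in are all invented for illustration, not taken from the posting.

```python
import sqlite3  # stand-in for any DB-API driver (e.g. pyodbc, teradatasql)

# Hypothetical table names -- adjust to the warehouse under test.
SOURCE, TARGET = "stg_orders", "dwh_orders"

def reconcile(conn: sqlite3.Connection) -> list[str]:
    """Compare row count and a simple numeric checksum between source and target."""
    issues = []
    for metric, sql in {
        "row_count": "SELECT COUNT(*) FROM {t}",
        "amount_sum": "SELECT TOTAL(amount) FROM {t}",  # checksum on a numeric column
    }.items():
        src = conn.execute(sql.format(t=SOURCE)).fetchone()[0]
        tgt = conn.execute(sql.format(t=TARGET)).fetchone()[0]
        if src != tgt:
            issues.append(f"{metric}: source={src} target={tgt}")
    return issues

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        f"CREATE TABLE {SOURCE}(id INTEGER, amount REAL);"
        f"CREATE TABLE {TARGET}(id INTEGER, amount REAL);"
        f"INSERT INTO {SOURCE} VALUES (1, 10.0), (2, 20.5);"
        f"INSERT INTO {TARGET} VALUES (1, 10.0);"  # deliberately missing a row
    )
    for issue in reconcile(conn):
        print("MISMATCH:", issue)
```

On very large tables the same idea is usually applied per partition (per load date or key range) so that a mismatch can be localized without a full-table scan.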
Senior Data Quality and Governance Analyst
Jones Lang LaSalle & Tétris
⚲ Warszawa
Requirements
- SQL
Our requirements:
• 4+ years in enterprise data governance/management across multiple systems
• Strong client relationship management and business analysis experience
• Deep knowledge of the data lifecycle, governance frameworks, and database management
• Fluent English with executive presence
• Strategic influencer who gains buy-in and drives change at all organizational levels
• Trusted advisor with exceptional listening, negotiation, and facilitation skills
• Self-motivated leader with strong ownership and independent thinking
• Technical translator who bridges data complexity with business-friendly communication
• Familiarity with Tableau/Power BI and SQL (business user), advanced Excel
Responsibilities:
• Establish and drive the data governance program, including policies, standards, and frameworks that win business buy-in
• Support the account in knowledge management by establishing and documenting data flows and processes
• Lead the Master Data Management strategy and oversee critical data flows
• Build a mature Data Stewardship program and democratize data knowledge across the organization
• Partner with IT, BI, and business leaders to solve complex data challenges
• Advise clients on governance best practices for their entire data landscape
• Identify areas for improvement and recommend solutions regarding data quality and processes
• Create Data Dictionaries and other metadata and process documentation to bring clarity to complexity
We offer:
• Competitive employment contract with the stability and security you deserve
• Comprehensive benefits package, including premium private healthcare, a flexible cafeteria system (Multisport) for personalized perks, and life insurance for peace of mind
• Clear career progression pathways with long-term growth opportunities and internal mobility across diverse roles and markets
• Premium workspace experience in the iconic Warsaw Spire with state-of-the-art facilities and convenient metro access, plus flexible hybrid working arrangements for optimal work-life balance
• Professional growth environment where your contributions directly impact business outcomes and where innovation and expertise are recognized and rewarded
Senior Data Platform Engineer
Inbank
⚲ Gdańsk
Requirements
- DBT
- Python
- Snowflake
- Airflow
- Amazon AWS
- ELT
Hello from Inbank! If you've ever bought something in three instalments or financed your gadget purchase online or in a shop, there's a good chance you've used something we built. We're not here to be another bank. At Inbank, we're a modular finance platform that helps businesses grow and customers pay with ease. We believe banking should feel invisible: an effortless layer that powers every great digital experience. Today, our financing rails are woven into the flow of 6,000+ leading retailers, giving around half a million customers a one-tap way to pay while turbo-charging merchants' growth. There are already 440+ of us across Estonia, Latvia, Lithuania, Poland, and Czechia, and we're continuing to grow as we help thousands of retailers expand their business and reach millions of shoppers.
We're looking for a Senior Data Platform Engineer to join our Data Warehouse team and help shape and scale our cloud data platform. This is a hands-on senior individual contributor role for someone who can design, build, and operate reliable data pipelines and, most importantly, bring strong senior-level dbt expertise to the team.
What you'll be doing:
• Owning and evolving core data platform capabilities such as CI/CD, environment management, observability, and access control to ensure the platform is scalable, reliable, and easy to build on.
• Building and owning data pipelines and transformations on our Snowflake-based data platform.
• Leading the design and implementation of dbt-based ELT, including scalable modelling, testing, and documentation.
• Developing orchestration, ingestion, and automation solutions using Airflow and Python.
• Improving developer experience and raising engineering standards across testing, deployment, maintainability, and platform operations.
• Partnering with analytics and business stakeholders to translate requirements into reliable, secure, production-grade datasets.
What we're really looking for:
This role is for a Senior Engineer with strong hands-on experience in dbt. We're looking for someone who has worked with dbt in production and is excited to help evolve and scale best practices across the team. Ideally, you'd have:
• Solid experience building and operating modern data platforms on AWS using Snowflake, dbt, Airflow, and Python.
• Senior-level mastery of dbt: designing modular models, enforcing testing standards, structuring projects for scale, improving documentation, and making sound architectural decisions.
• Deep understanding of data modelling, ELT design, warehouse performance optimisation, and production-grade data engineering.
• Proven experience owning production data pipelines and making pragmatic technical trade-offs.
• A quality-focused mindset: you care about reliable, well-tested, well-documented, and trusted data.
• A security-focused mindset: you design and operate data platforms with attention to access control, data protection, change management, and the needs of a regulated environment.
If you're excited about this role but don't meet every requirement, we still encourage you to apply; we value diverse experiences and perspectives and believe great people grow into great roles.
What you'll get in return:
• A competitive salary tailored to your experience, along with a comprehensive benefits package.
• Wellbeing support through sports compensation or additional health insurance to help you stay active and healthy.
• Extra vacation days after your third year, giving you more time to rest and recharge.
• A 6-week paid sabbatical after four years, recognising strong performance and long-term contribution.
• A dynamic and inspiring work environment where you're encouraged to grow and take ownership of your work.
• Flexibility through a hybrid and autonomous way of working, built on trust and accountability.
• The opportunity to collaborate with talented international colleagues across multiple markets.
• Regular team events and additional perks that make work more enjoyable and help celebrate successes together.
Sounds like a fit? Apply, and our recruitment team will be in touch. If you're unsure but curious, apply anyway; we'd be happy to explore the fit with you.
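The role centers on orchestrating dbt transformations with Airflow and Python. As a rough illustration only (the DAG id, schedule, and dbt project path are invented placeholders, not Inbank's actual setup), a daily Airflow 2.x DAG that runs `dbt run` followed by `dbt test` could look like this:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical dbt project location -- a placeholder, not a real path.
DBT_DIR = "/opt/dbt/warehouse"

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; earlier versions use schedule_interval
    catchup=False,
) as dag:
    run = BashOperator(
        task_id="dbt_run",
        bash_command=f"dbt run --project-dir {DBT_DIR}",
    )
    test = BashOperator(
        task_id="dbt_test",
        bash_command=f"dbt test --project-dir {DBT_DIR}",
    )
    run >> test  # only test models after they have been built
```

Real deployments usually split this into per-layer runs with alerting and retries; the two-task shape is just the smallest useful skeleton of the pattern.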
Senior Data Engineer (Data Governance / Tool Selection)
best HR and PM solutions
⚲ Cracow, Warszawa, Wrocław, Poznań, Gdańsk
200 - 240 PLN/h netto (B2B)
Requirements
- Airflow
- Apache Kafka
- Data modeling
- SQL
- Python
- Data Integration
- Apache Spark
This is one of the largest system transformations carried out in a large organization with an extensive IT environment (several thousand technology specialists). The program covers:
• Implementation of an integrated ERP-class system
• Building and standardizing the integration architecture (ESB / middleware)
• Implementation of a BPM platform
• Transformation of the Data Management area (MDM, migrations, governance)
It is an initiative of high strategic importance, with board-level exposure and a real impact on the architecture of the entire organization.
Your role
We are looking for an experienced Data Engineer to join the project in its early phase, focusing on Data Governance, in particular on the analysis, selection, and implementation groundwork for a Data Governance tool (e.g. Collibra, Ataccama ONE). The role starts as a consulting engagement (part-time / advisory), with the option of moving to full engagement in later stages of the program.
Responsibilities
• Analyzing the organization's Data Governance needs and recommending a tool (e.g. Collibra, Ataccama ONE, DataHub, Google Cloud Dataplex)
• Supporting the tool selection process (analysis, comparison, workshops, architecture recommendations)
• Preparing the Data Governance implementation approach (data catalog, data quality, data lineage)
• Designing and building data pipelines (batch / streaming)
• Integrating data from ERP, BPM, and other source systems
• Implementing Data Quality and data validation mechanisms
• Contributing to data model design (including MDM)
• Supporting data flow analysis and mapping (data lineage)
• Working with architects, analysts, and integration teams
• Supporting data migrations and legacy system transformations
Requirements
• 5+ years of experience as a Data Engineer
• Experience in complex system landscapes (integrations, many data sources)
• Very good knowledge of SQL and Python
• Experience with tools such as Spark, Airflow, Kafka (or similar)
• Experience designing data pipelines (ETL / ELT)
• Practical knowledge of Data Governance / Data Quality / MDM
• Experience with Data Governance tools (e.g. Collibra, Ataccama ONE, DataHub, Google Cloud Dataplex)
Key:
• Experience analyzing, comparing, and/or selecting a Data Governance / Data Catalog tool
• Ability to evaluate solutions in terms of architecture, integration, and organizational needs
• Understanding of data catalogs, data lineage, data quality, and data ownership
• Ability to work close to the architecture and make technical decisions
Nice to have
• Experience in ERP projects (e.g. SAP, Oracle)
• Knowledge of event-driven architecture
• Experience in a multi-vendor environment
• Cloud experience (AWS / GCP / Azure)
Why join
• Participation in one of the largest transformation programs
• Real influence on tool selection and the Data Governance direction in the organization
• Work close to architectural and strategic decisions
• Opportunity to move into an expert / lead role in later project stages
Senior Data Engineer
Mindbox Sp. z o.o.
⚲ Kraków
27 300 - 33 600 PLN (B2B)
Requirements
- MS SQL Server
- SQL
- ETL
- Azure Data Factory
- SSRS
- Azure Data Lake
- ADLS
- Performance tuning
- Data pipelines
- Data models
- Power BI
- Azure Databricks (nice to have)
- Spark (nice to have)
- Cloud (nice to have)
- .NET (nice to have)
- API (nice to have)
- SFTP (nice to have)
- IaC (nice to have)
- ARM (nice to have)
- Bicep (nice to have)
- Terraform (nice to have)
About the project:
At Mindbox we connect top IT talents with technology projects for leading enterprises across Europe. This role is part of a strategic banking transformation initiative focused on modernizing approximately a dozen .NET applications and MSSQL databases into a modular, cloud-native Azure architecture. The goal is to build a modern, data-centric platform that centralizes data flows, reporting, and analytics to support enterprise decision-making and regulatory requirements. The environment includes ASP.NET MVC/Razor, legacy Angular, WPF, WinForms, Windows Services, SSIS, and modern Azure services. Sounds like your kind of challenge?
What you get in return:
• Flexible cooperation model: choose the form that suits you best (B2B, employment contract, etc.)
• Hybrid work setup: remote days available depending on the client's arrangements (3 days per week in the Kraków office)
• Collaborative team culture: work alongside experienced professionals eager to share knowledge
• Continuous development: access to training platforms and growth opportunities
• Comprehensive benefits, including Interpolska Health Care, a Multisport card, Warta Insurance, and more
• High-quality equipment: laptop and essential software provided
Note: Detailed project information will be shared during the recruitment process.
Requirements:
• Experience: 5+ years as a Data Engineer or in a similar role.
• Databases: Strong hands-on experience with MS SQL Server / Azure SQL.
• ETL / ELT: Proven experience designing, building, and migrating ETL/ELT using Azure Data Factory and SSIS.
• Reporting: Experience with SSRS and Crystal Reports; ability to modernize legacy reporting.
• Azure Data Services: Practical experience with Azure Data Factory and Azure Data Lake / ADLS Gen2.
• Data Warehouse Design: Solid understanding of dimensional modelling and data warehouse concepts.
• SQL & Performance: Advanced SQL skills, including complex queries and performance tuning.
• Architecture & Documentation: Ability to design end-to-end data solutions and produce Solution Design Documentation (SDD).
• Data Pipelines: Experience building enterprise-scale pipelines for batch and streaming.
• Governance & Security: Knowledge of data quality, governance, and regulatory compliance.
• Collaboration: Experience integrating data from multiple sources and working with backend teams.
• Power BI: Familiarity with Power BI data models and performance optimization.
Nice to have:
• Azure Databricks & Apache Spark for large-scale processing.
• Experience migrating legacy ETL/reporting to the cloud.
• Knowledge of the .NET ecosystem and API integration patterns.
• Experience with secure data sources (CyberArk, SFTP).
• Streaming or near-real-time data processing.
• CI/CD for data platforms and Infrastructure as Code (Terraform, ARM, Bicep).
• Cloud cost optimization strategies.
Joining this project you'll become part of Mindbox, a tech-driven company where consulting, engineering, and talent meet to build meaningful digital solutions. We'll back you up every step of the way, accelerate your development, and ensure your skills make a difference.
Daily tasks:
• Design & build data pipelines: Implement and maintain enterprise-scale ETL/ELT pipelines using Azure Data Factory and related Azure services.
• Migrate & modernize ETL: Upgrade legacy ETL/ELT processes (including SSIS) and migrate pipelines to the cloud.
• Batch & streaming processing: Develop and operate both batch and near-real-time/streaming workflows.
• Data warehouse & lake design: Design and evolve Data Warehouse and Data Lake architectures (dimensional modelling, star/snowflake schemas).
• Database performance & operations: Optimize and manage MS SQL Server / Azure SQL for reliability and performance.
• Reporting & integration support: Modernize reporting stacks (SSRS, Crystal Reports) and integrate data from APIs, files, and secure sources (e.g., CyberArk, SFTP).
• Collaboration: Work closely with .NET/API teams to enable secure and reliable data flows.
• Architecture & documentation: Contribute to solution and data architecture and maintain Solution Design Documentation (SDD).
• Data quality & governance: Ensure compliance with data quality, governance, and banking security standards.
• Analytics support: Assist with Power BI data models, refresh strategies, and performance tuning.
Senior Cloud Engineer
Kuehne+Nagel
⚲ Wrocław
Requirements
- AWS
- Bash
- Terraform
- Python
As a Senior Cloud Engineer, you will be responsible for the development and maintenance of the cloud landing zones that form the foundation of our cloud infrastructure. Working closely with development teams, you will ensure that our environments are scalable, secure, and aligned with best practices, enabling the reliable delivery and operation of cloud-native solutions.
How you create impact
• Develop and deploy sophisticated cloud architectures and solutions that meet the organization's technical and business requirements. Ensure these solutions are scalable, reliable, and secure.
• Design robust and scalable cloud architectures that address specific business needs. Implement complex solutions involving multiple cloud services and integrations.
• Implement and enforce best practices for cloud security and compliance. Ensure that cloud environments meet regulatory and industry standards.
• Monitor and analyse cloud performance metrics. Make recommendations for optimizing resource usage and cost management, including implementing autoscaling and load balancing strategies.
• Lead the troubleshooting and resolution of complex issues related to cloud infrastructure and applications. Provide expert support and guidance for resolving critical incidents.
• Create and maintain detailed documentation for cloud architectures, processes, and configurations. Generate reports on system performance, security, and incidents.
What we would like you to bring
• Deep understanding of AWS. Expertise in advanced services and features like container orchestration (e.g., Kubernetes), serverless computing, and complex networking setups.
• Proficiency in programming languages commonly used in cloud environments, such as Bash or Python.
• Experience with IaC tools such as Terraform or AWS CloudFormation to automate and manage infrastructure provisioning and configuration.
• Skills in optimizing cloud resources for performance, scalability, and cost-efficiency. Experience with load balancing, autoscaling, and performance monitoring.
• Familiarity with cloud monitoring and logging tools (e.g., AWS CloudWatch, Azure Monitor) to track and analyze system performance.
• Advanced knowledge of cloud security best practices, including identity and access management (IAM), encryption, and vulnerability management.
• In-depth understanding of cloud networking concepts such as Virtual Private Clouds (VPCs), subnets, VPNs, and network security.
• Ability to lead the resolution of complex issues related to cloud infrastructure and applications. Strong troubleshooting skills and experience handling critical incidents.
• Skills in creating and maintaining comprehensive documentation for cloud architectures, processes, and configurations.
• Communication skills in English, both written and oral.
What's in it for you
Your work will have a direct influence on the future of logistics and IT. Your colleagues are experts who shape the IT industry in Poland and worldwide. We want you to feel comfortable in your working surroundings, so we offer flexible working hours and a mobile working arrangement. We value teamwork, continuous learning, and diversity. Kuehne+Nagel offers you a custom-made career through individual planning and supervisors who support and advise you in every way. You will also be supported by a range of work-life balance offerings and supplementary health insurance.
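One listed responsibility is optimizing resource usage and cost management. As a small, hedged illustration of the kind of Python automation this implies (the tag key and region are placeholders, not Kuehne+Nagel conventions), the boto3 snippet below flags running EC2 instances that are missing a cost-allocation tag:

```python
import boto3

REQUIRED_TAG = "cost-center"  # hypothetical cost-allocation tag key

def untagged_running_instances(region: str = "eu-central-1") -> list[str]:
    """Return IDs of running EC2 instances lacking the required tag."""
    ec2 = boto3.client("ec2", region_name=region)
    flagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    flagged.append(inst["InstanceId"])
    return flagged

if __name__ == "__main__":
    print("Untagged running instances:", untagged_running_instances())
```

In a landing-zone context, checks like this usually end up as scheduled Lambda functions or as policy-as-code (e.g. tag policies enforced at provisioning time in Terraform) rather than ad hoc scripts.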
Mobile Software Engineer (Android)
Allegro
⚲ Poznań
Requirements
- Git
- Android
- CI/CD
- Kotlin
- RESTful APIs
Location: Poznań, PL, 61-569 | Company: Allegro sp. z o.o. | Team: Technology | Contract Type: Employee
Main Responsibilities
As a Mobile Software Engineer, you will be a key contributor to the development of new features and improvements for our mobile applications. You will work closely with product owners, designers, and other engineers to translate user needs into robust, high-performance mobile experiences. This role is ideal for someone with solid experience in mobile development, a strong understanding of software engineering principles, and a desire to grow their expertise in a fast-paced e-commerce environment.
What you will do:
• Design, develop, test, and deploy features for the Allegro Android app, ensuring high quality, performance, and responsiveness.
• Collaborate with product, design, and other engineering teams to define and implement innovative solutions.
• Participate in code reviews, contribute to technical discussions, and help maintain high code quality standards.
• Identify and resolve performance and scalability issues.
• Continuously improve our development processes, tools, and infrastructure.
• Share knowledge and mentor junior engineers, contributing to the growth of the team.
• Stay up to date with the latest mobile technologies and industry trends.
We are looking for someone who has:
• 3+ years of professional experience in native Android (Kotlin) mobile application development.
• Strong understanding of mobile application architecture patterns (e.g., MVVM, MVI, Clean Architecture).
• Experience with RESTful APIs, asynchronous programming, and efficient data handling.
• Proficiency in writing unit and integration tests.
• Familiarity with version control systems (e.g., Git).
• Solid understanding of software engineering principles, data structures, and algorithms.
• Experience with continuous integration and continuous delivery (CI/CD) pipelines.
• Good communication skills in English and Polish, both written and spoken.
• A proactive approach to problem-solving and a passion for creating excellent user experiences.
Bonus points if you have:
• Experience with large-scale, high-traffic mobile applications.
• Knowledge of reactive programming (e.g., RxJava/RxKotlin, Combine).
• Familiarity with Jetpack Compose and Coroutines.
• Experience with A/B testing frameworks.
• Contributions to open-source projects or a strong GitHub profile.
• Experience working in an Agile/Scrum environment.
What we offer:
• Flexible working hours in the hybrid model (4/1): our hours can start between 7:00 a.m. and 10:00 a.m. We also have 30 days of occasional remote work.
• Annual bonus based on your annual performance and company results.
• Well-located offices (with e.g. fully equipped kitchens, bicycle parking, terraces full of greenery) and excellent work tools (e.g., raised desks, ergonomic chairs, interactive conference rooms).
• A 16" or 14" MacBook Pro or corresponding Dell with Windows (if you don't like Macs) and all the necessary accessories.
• A wide selection of fringe benefits in a cafeteria plan: you choose what you like (e.g., medical, sports or lunch packages, insurance, purchase vouchers).
• English classes that we pay for, related to the specific nature of your job.
• A training budget, inter-team tourism, hackathons, and an internal learning platform where you will find multiple trainings.
• An additional day off for volunteering, which you can use alone, with a team, or with a larger group of people connected by a common goal.
• Social events for Allegro people: Spin Kilometers, Family Day, Fat Thursday, Advent of Code, and many other occasions we enjoy. And that's just the beginning!
#goodtobehere means that:
• You will join a team you can count on: we work with top-class specialists who have knowledge- and experience-sharing in their DNA.
• You will love our level of autonomy in team organization, the space for continuous development, and the opportunity to try new things. You get to choose which technology solves the problem and you are responsible for what you create.
• You will value our Developer Experience and the full platform of tools and technologies that make creating software easier. We rely on an internal ecosystem based on self-service and widely used tools such as Kubernetes, Docker, Consul, GitHub, and GitHub Actions. Thanks to this, you can contribute to Allegro from your very first days on the job.
• You will be equipped with modern AI tools to automate repetitive tasks, allowing you to focus on developing new services and refining existing ones (also leveraging AI support).
• You will create solutions that will be used (and loved!) by your friends, family, and millions of our customers.
• You will meet the Allegro Scale, which starts with over 1000 microservices, an open-source data bus (Hermes) with 300K+ rps, a Service Mesh with 1M+ rps, tens of petabytes of data, and production-used machine learning.
• You will become part of Allegro Tech: we speak at industry conferences, cooperate with tech communities, run our own blog (it's been over 10 years!), record podcasts, lead guilds, and organize our own internal conference, the Allegro Tech Meeting. We create solutions we love to talk about (and can)!
Send us your CV and... see you at Allegro!
Microsoft Automation Specialist
Arbiter S.A.
⚲ Wrocław
Requirements
- Python
Our requirements:
• practical knowledge of the Microsoft 365 ecosystem (Power Automate, SharePoint, Excel; Power Apps nice to have),
• Python programming basics (scripts, automation, working with Excel/CSV files),
• ability to build automations and integrate tools,
• knowledge of Microsoft Copilot and of using AI in operational work,
• analytical thinking and independence in problem-solving.
Nice to have:
• experience automating business processes in a company.
Responsibilities:
• building and developing automation flows in Microsoft Power Automate, SharePoint, Outlook, Excel, etc.,
• using Microsoft Copilot and AI tools to streamline everyday processes and build automated solutions,
• integrating systems and automating tasks across Microsoft 365 tools,
• analyzing business processes and proposing improvements,
• supporting employees in using Microsoft 365 tools,
• writing process documentation and how-to guides,
• preparing Python scripts that support data processing and automation.
We offer:
• work in a fast-growing company,
• real influence on process improvements and automation,
• access to modern tools (AI, Copilot, Power Platform),
• a flexible form of cooperation and a friendly atmosphere,
• technology skills development and training.
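The posting asks for Python scripts that process Excel/CSV files. A minimal sketch of that kind of automation is shown below; the file names and column names are invented for illustration (pandas, with openpyxl required for .xlsx output):

```python
import pandas as pd  # requires pandas and, for .xlsx output, openpyxl

# Hypothetical input files and column names.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
customers = pd.read_excel("customers.xlsx")

# Join the two sources, aggregate monthly revenue per customer,
# and export a report that Power Automate could then pick up.
report = (
    orders.merge(customers, on="customer_id", how="left")
          .assign(month=lambda df: df["order_date"].dt.to_period("M").astype(str))
          .groupby(["month", "customer_name"], as_index=False)["amount"].sum()
)
report.to_excel("monthly_revenue.xlsx", index=False)
print(f"Wrote {len(report)} rows to monthly_revenue.xlsx")
```

A script like this typically slots into a Power Automate flow as the data-crunching step, with the flow handling file pickup from SharePoint and distribution of the result.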
Manager, Machine Learning
Allegro
⚲ Poznań
Requirements
- Machine Learning
- ETL
- Team Management
- ELT
- Product Management
Job Description:
As a Machine Learning Manager, you will lead a team of engineers dedicated to maximizing the business impact of Allegro Ads through sophisticated data and ML engineering. You will play a pivotal role in ensuring the stability and continuous evolution of the online data pipelines that power our advertising ecosystem. By overseeing feature engineering and the development of robust model-serving services, you will provide the essential engineering backbone for our ML models.
This is the right job for you if you:
• Have experience in leading and mentoring a team of engineers (MLEs/DEs), fostering their professional growth and high engagement.
• Possess a broad technical background in the full software and ML lifecycle, from system design to deployment and production support.
• Are experienced in efficient ETL/ELT pipeline design and in ensuring data integrity and high availability.
• Have a passion for bridging Data Engineering workflows with ML requirements, specifically regarding Feature Store integration and model serving infrastructure.
• Can act as a key partner for Product Management, aligning technical roadmaps with business priorities.
• Are skilled in managing technical debt while balancing the delivery of new features in high-scale environments.
• Know English at B2 level or higher.
In your daily work you will handle the following tasks:
• Team Leadership: Managing a team of 6 engineers.
• Technical Delivery: Overseeing requirements engineering, code reviews, automated testing, and robust monitoring (SLA/SLO) for real-time advertising systems.
• Data & ML Strategy: Taking ownership of the evolution of attribution logic and data processing efficiency.
• Operational Excellence: Maintaining high SLA and low latency for online pipelines processing massive volumes of ad-related data (clicks/views) in real time.
• Stakeholder Management: Translating complex business needs into actionable technical backlogs and managing executive expectations.
• Strategic Ownership: Managing infrastructure costs (GCP) effectively and addressing legacy components within the Ads Data/ML domain.
What's in it for you:
• Flexible working hours in the hybrid model (4/1): working hours start between 7:00 a.m. and 10:00 a.m. We also have 30 days of occasional remote work.
• Long-term discretionary incentive plan based on Allegro.eu shares (restricted stock units).
• Annual bonus based on your annual performance and company results.
• Well-located offices (with e.g. fully equipped kitchens, bicycle parking, terraces full of greenery) and excellent work tools (e.g., raised desks, ergonomic chairs, interactive conference rooms).
• A 16" or 14" MacBook Pro or corresponding Dell with Windows (if you don't like Macs) and all the necessary accessories.
• A wide selection of fringe benefits in a cafeteria plan: you choose what you like (e.g., medical, sports or lunch packages, insurance, purchase vouchers).
• English classes that we pay for, related to the specific nature of your job.
• A training budget, inter-team tourism, hackathons, and an internal learning platform where you will find multiple trainings.
• An additional day off for volunteering, which you can use alone, with a team, or with a larger group of people connected by a common goal.
• Social events for Allegro people: Spin Kilometers, Family Day, Fat Thursday, Advent of Code, and many other occasions we enjoy.
#goodtobehere means that:
• You will join a team you can count on: we work with top-class specialists who have knowledge- and experience-sharing in their DNA.
• You will love our level of autonomy in team organization, the space for continuous development, and the opportunity to try new things. You get to choose which technology solves the problem and you are responsible for what you create.
• You will value our Developer Experience and the full platform of tools and technologies that make creating software easier. We rely on an internal ecosystem based on self-service and widely used tools such as Kubernetes, Docker, Consul, GitHub, and GitHub Actions. Thanks to this, you can contribute to Allegro from your very first days on the job.
• You will be equipped with modern AI tools to automate repetitive tasks, allowing you to focus on developing new services and refining existing ones (also leveraging AI support).
• You will create solutions that will be used (and loved!) by your friends, family, and millions of our customers.
• You will meet the Allegro Scale, which starts with over 1000 microservices, an open-source data bus (Hermes) with 300K+ rps, a Service Mesh with 1M+ rps, tens of petabytes of data, and production-used machine learning.
• You will become part of Allegro Tech: we speak at industry conferences, cooperate with tech communities, run our own blog (it's been over 10 years!), record podcasts, lead guilds, and organize our own internal conference, the Allegro Tech Meeting. We create solutions we love to talk about (and can)!
Send us your CV and... see you at Allegro!
AI Consultant
FUNDACJA ROZWOJU PRZEMYSŁU
⚲ Kielce
7 500 - 9 500 PLN gross / month
Requirements
- Python
Our requirements:
• Practical knowledge of AI tools (ChatGPT, Copilot, Claude, low-code AI, and other AI tools)
• Min. 2 years of experience implementing IT or AI solutions in companies
• Ability to run workshops and present to groups
• Communication skills: you can translate technology into business language
• Experience working with the SME sector
• Category B driving licence
Nice to have:
• Knowledge of the Świętokrzyskie regional market
• University degree (IT, management, engineering, or related)
• Experience in prompt writing, building teams of AI agents, and automating business processes
• English at min. B2
Responsibilities:
• You run regular AI workshops for SME owners and management (groups of 15-30 people)
• You conduct digital maturity audits and AI Quick Scans: diagnosing a company's readiness for digital and AI transformation and preparing reports with recommendations
• You draw up AI implementation plans: roadmaps, tool selection, schedules, budgets
• You support clients during implementation and coordinate work with technology vendors
• You build client relationships through workshops and networking, not cold calling
• You co-create expert content: articles, case studies, LinkedIn posts
• You monitor AI trends and test new tools for their usefulness to SMEs, building FURP's knowledge base and methodology
• You represent FURP at industry events, trade fairs, and conferences
We offer:
• A stable employment contract at an accredited innovation center
• Real influence on the shape of the organization: fast decisions, no corporate bureaucracy
• Diverse projects, from industry to technology startups; every company is a different challenge
• A training budget: courses, AI certifications, postgraduate studies
• Participation in international events and the European EDIH network, with contact with innovation hubs from across Europe
• A flexible work model: Kielce office + client visits + remote work
• A travel allowance for client visits
• A bonus system tied to sales results (quarterly bonuses of up to 20% of base salary)
Senior IT Analyst
emagine Polska
⚲ Gdynia
Requirements
- Business Analysis
- Documentation
- API (Application Programming Interface)
- Atlassian JIRA
- Operations
- Atlassian Confluence
- REST
- Project Management
- Agile
- SOAP
• Industry: banking
• Rate: B2B up to 140 PLN/h net + VAT
• Location: Gdańsk/Warsaw/Gdynia (hybrid, 3 days per week in the office)
• Start: March
Introduction & Summary
We are seeking a highly skilled IT Analyst specializing in business analysis and IT consultancy. The ideal candidate will possess extensive experience in IT and business analysis, with a proven ability to communicate effectively in both written and spoken English. Proficiency in tools such as Jira and Confluence, along with a solid understanding of API development, Data Mapping, and Process Modelling, is essential for this role.
Main Responsibilities
• Collaborate with business stakeholders to understand and document business strategies.
• Elicit functional and non-functional requirements.
• Design and propose sustainable solutions in alignment with the target picture.
• Document requirements and maintain comprehensive records.
• Facilitate communication between technical teams and business units.
Key Requirements
• Experience in IT and/or business analysis
• Excellent written and spoken English communication skills
• Familiarity with international collaboration
• Experience using Jira and Confluence
• Knowledge of API development, Data Mapping, and Process Modelling
Nice to Have
• Familiarity with regulatory requirements in the banking sector
• Practical knowledge of REST/SOAP APIs
• Knowledge of PSD2, PSD3/PSR, and NFR
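Since working knowledge of REST APIs is expected here, a minimal illustration of exercising a REST endpoint from Python follows; the URL, resource, and fields are invented placeholders, not any bank's actual API:

```python
import requests

# Hypothetical endpoint -- stands in for any REST API under analysis.
BASE_URL = "https://api.example.com/v1"

# Read a resource; REST payloads are typically JSON over HTTPS.
resp = requests.get(f"{BASE_URL}/accounts/123", timeout=10)
resp.raise_for_status()           # fail loudly on 4xx/5xx
account = resp.json()
print(account.get("status"))

# Apply a partial update to the same resource (PATCH semantics).
resp = requests.patch(
    f"{BASE_URL}/accounts/123",
    json={"status": "active"},
    timeout=10,
)
resp.raise_for_status()
```

A SOAP service, by contrast, would exchange XML envelopes against a WSDL contract, typically via a client library such as zeep; the analyst's data-mapping work is largely about reconciling the two styles of payload.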
Global IT Service Manager | f/m/d
ERGO Technology & Services
⚲ Warszawa, Gdańsk
Requirements
- SQL Server
- Oracle
- AWS RDS
- ITIL
- Azure SQL
About Us
ERGO Technology & Services S.A. (ET&S S.A.) was established in January 2021 following the integration of ERGO Digital IT and Atena into one entity, leveraging both companies' strengths and best practices. As a part of ERGO Technology & Services Management AG, the technology holding of ERGO Group AG, we support millions of internal and external customers with state-of-the-art IT solutions to everyday problems. In October 2022, ET&S S.A. expanded its scope of operations by creating a Business Services unit to contribute in a new way to the growth of ERGO's business. Acting as a co-partner and internal consultant, it adds non-IT value and supports the development of the entire ERGO Group, currently offering skills in reporting, analysis, actuarial, and input management. We are committed to fostering innovation and meeting the evolving needs of our clients worldwide. Discover how we implement AI, IoT, Voice Recognition, Big Data science, advanced mobile solutions, and business-related services to anticipate and address our customers' future needs.
How you will get the job done
• owning the global Database Service (e.g., Oracle, SQL Server)
• defining the database service strategy, roadmap, standards, and KPIs in line with business and IT strategy
• overseeing the day-to-day operations of database services across all regions and time zones
• overseeing backup/restore strategies, DR/BCP, and capacity planning
• ensuring compliance with SLAs for availability, performance, incident response, and request fulfilment
• defining support models (L1-L3), escalation paths, and handover processes
• managing the service catalog entries, SLAs, OLAs, and pricing
• coordinating and prioritizing incident, problem, and change management activities (ITIL-based)
• driving root-cause analysis (RCA) and preventive actions for major incidents
• tracking and optimizing costs (licenses, cloud consumption, hardware, support contracts)
• leading and coordinating internal DBAs and service teams (including offshore/nearshore resources)
• managing external vendors and cloud providers delivering database-related services
• identifying and driving service improvements (automation, self-service, performance tuning)
• collaborating with architects to define database architectures, patterns, and reference designs
• monitoring and reporting service performance, risk, and improvement plans to leadership
• supporting and/or leading database-related projects, migrations, and transformation initiatives (e.g., cloud migration, consolidation)
Skills and experience you will need
• fluency in English
• bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience)
• 5+ years of experience in database administration/engineering or infrastructure operations
• 3-5+ years in a service management, team lead, or global operations role
• strong experience with major RDBMS platforms (e.g., Oracle, SQL Server) and/or cloud databases (AWS RDS, Azure SQL, GCP Cloud SQL)
• solid understanding of ITIL processes (Incident, Problem, Change, Capacity, Availability)
• experience working in a global, multi-time-zone, multi-vendor environment
• strong service management mindset and customer orientation
• excellent communication skills with both technical and business stakeholders
Nice to have
• ITIL Foundation or higher certification
• cloud certifications (AWS/Azure/GCP) related to databases or architecture
• experience in regulated industries (finance, pharma, insurance, etc.)
Perks & Benefits
• Let's be healthy: a medical package, a sports card, and numerous sports sections are some of the benefits that help our employees stay in good shape.
• Let's be balanced: work-life balance is a key aspect of a healthy workplace. We offer our employees flexible working hours, a confidential employee assistance program, and the possibility of remote working. However, staying at home with our in-office gaming room and dog-friendly office in Warsaw won't be easy.
• Let's be smart: we organize numerous workshops and training courses. Thanks to hackathons and meetups, our specialists share their expertise with others. Additionally, we have a wide range of digital learning platforms and language courses.
• Let's be responsible: each year, we participate in several CSR activities, during which, together with our colleagues, we do our best to create a better future.
• Let's be fun: company-wide bike races and soccer matches, film marathons in our cinema room, and other engaging team-building activities - we've got it covered!
• Let's be diverse: every team member is valued, regardless of gender, nationality, religious beliefs, disability, age, and sexual orientation or identity. Your qualifications, experience, and mindset are our greatest benefit!
ETL Engineer
SCALO
⚲ Poznań
18 480 - 23 520 PLN net (B2B)
Requirements
- SQL
- PostgreSQL
- Pentaho
- Oracle
- Teradata
- VBA
At Scalo we deliver software projects and support our partners in growing their business. We build software that lets people drive change, move faster, and achieve better results. We use a broad spectrum of IT services to help our clients, including technology consulting, software development, embedded systems, cloud solutions, data management, and dedicated development teams.
Hi! You'll find what you're looking for here - see for yourself! At Scalo, what awaits you:
• participation in a project building and developing key systems responsible for risk analysis and assessment and for creating a single, consistent customer view across the entire bank. The team the ETL Developer will join works at the boundary of IT and business, cooperating with data experts, architects, analysts, and risk units, bridging business requirements with technology and processing engines,
• development and maintenance of data processing (ETL) workflows,
• building and optimizing data pipelines in a data warehouse environment,
• preparing technical assumptions and specifications for developers (e.g. in Teradata),
• working with Ab Initio graphs and ETL-class tools (Pentaho, Hitachi),
• using MQ queues, schedulers, and data integration systems,
• participation in designing and developing risk assessment systems, including solutions using Machine Learning,
• cooperation with business teams to translate business requirements into SQL, data flows, and ETL logic,
• a rate of up to 140 PLN/h on B2B, depending on experience.
This offer is for you if:
• you have 5+ years of commercial experience in data analysis and a very good command of database and ETL technologies,
• you know SQL very well (you can write complex queries and joins, including inner joins, and work on large data sets and warehouses),
• you have experience with tools such as Ab Initio, Pentaho, or Hitachi and with ETL processes,
• you know the PostgreSQL, Oracle, and Teradata database environments,
• you can program in VBA or another scripting language,
• you are comfortable juggling multiple work streams and can organize tasks for yourself and others,
• you care about the quality of the analyses you deliver,
• you easily build relationships with both IT and business,
• your English allows you to communicate freely,
• you are not afraid of AI tools and are open to growing toward combining classic ETL with new analytical technologies.
Nice to have:
• experience analyzing risk or CRM systems,
• practical knowledge of Ab Initio Graphs and Big Data processing,
• experience with MQ, schedulers, and system queues,
• knowledge of Machine Learning and its use in scoring processes,
• experience working at the interface of business and IT in a large organization,
• ability to design data flows and mappings in data warehouses.
What do we have for you?
• stable cooperation and diverse projects (Software, Embedded, Data, Cloud Services),
• the chance to grow as a Scalo Ambassador: hone your technical and pre-sales skills in our expert community, influence the organization's development, support others,
• space to build your personal brand: create valuable content, share knowledge and experience, speak at events (e.g. ProMeet),
• private medical care - Luxmed,
• a cafeteria benefits system - Motivizer,
• access to a Multisport card,
• an employee referral program with rewards,
• company events and team get-togethers.
Data Solution Architect
B2B.NET S.A.
⚲ Gdańsk, Gdynia, Warszawa
Requirements
- AWS
- Snowflake Data Cloud
Our requirements:
• Minimum 5 years of experience as a Solution Architect or Data Architect in an enterprise environment
• Experience designing the architecture of large, high-scale distributed systems
• Very good knowledge of data architecture, Big Data, ETL, and SQL
• Hands-on experience with the Snowflake platform
• Knowledge of AWS services (data, storage, compute, orchestration)
• Experience designing and implementing ETL pipelines
• Ability to work with business stakeholders (PO, SME)
• Experience with CI/CD in the context of data solutions
• Knowledge of API integration, event-driven architecture, and messaging
Nice to have:
• Experience with the Cloudera / Hadoop ecosystem
• Experience orchestrating pipelines (e.g. Airflow, AWS Step Functions)
• Knowledge of Machine Learning (models, pipelines, integration)
• Experience in the financial sector (e.g. Credit Risk)
About the project:
We are looking for an experienced Data Solution Architect responsible for designing, implementing, and optimizing data architecture in large systems. The ideal candidate will have experience working with Snowflake- and AWS-based solutions and designing solutions for extensive data platforms.
Responsibilities:
• Designing and implementing the solution architecture (HLD + LLD)
• Translating business requirements into technical solutions
• Supporting delivery teams in implementing the architecture
• Working with Product Owners and SMEs (Subject Matter Experts)
• Creating and maintaining architecture documentation
• Defining architecture standards and best practices
• Taking part in technology decisions
• Optimizing ETL pipelines and the data architecture
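Hands-on Snowflake experience is a hard requirement here, so a minimal illustration of programmatic access with the `snowflake-connector-python` package follows; the account, credentials, warehouse, and query are placeholders, not details from the posting:

```python
import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical connection parameters.
conn = snowflake.connector.connect(
    account="xy12345.eu-central-1",
    user="ARCHITECT",
    password="...",          # in practice, prefer key-pair auth or SSO
    warehouse="ANALYTICS_WH",
    database="DWH",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Placeholder query: daily row volumes of a hypothetical fact table.
    cur.execute(
        "SELECT DATE_TRUNC('day', load_ts) AS d, COUNT(*) "
        "FROM fact_transactions GROUP BY d ORDER BY d DESC LIMIT 7"
    )
    for day, cnt in cur.fetchall():
        print(day, cnt)
finally:
    conn.close()
```

At the architecture level, the interesting decisions sit around this snippet rather than inside it: warehouse sizing per workload, role-based access, and whether transformations run via ETL tooling or in-warehouse ELT.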
Data Solution Architect
Jit Team
⚲ Gdańsk, Gdynia, Warszawa, Łódź
25 000 - 29 000 PLN net (B2B)
Requirements
- AWS
- Machine Learning
- ETL
- Snowflake
- SQL
Salary: 1200 - 1400 PLN/day on B2B
Work model: flexible hybrid from Gdynia / Gdańsk / Warszawa / Łódź (at least 2-3 days per week from the office)
Why choose this offer?
• You can expect a flexible work organization.
• The international work environment will give you the opportunity to use English on a daily basis.
• The Scandinavian organizational culture will provide you with work-life balance, and you will gain time for additional training (financed by Jit).
• The Jit community will make sure you have a good time at regular team get-togethers.
Project
You will be involved in a financial project within the banking sector, focused on Credit Risk and Information Capability, working in a complex, enterprise-scale environment. The role combines Solution Architecture and Data Architecture responsibilities, supporting the design and delivery of robust, scalable, and secure solutions.
Responsibilities you'll have
• Translate functional and non-functional requirements into high-level solution designs
• Define detailed solution designs across all levels and provide guidance to the project team
• Facilitate work on business requirements with Product Owners, Process Owners, and SMEs
• Collaborate closely with stakeholders, architects, and project team members
• Create and review documentation from both business and technical architecture perspectives
• Handle processes, artifacts, and documentation describing the solution architecture
Expected competences and knowledge
• Experience working with architecture and technology implementation of large-scale enterprise solutions
• Understanding of modern application stacks, including open-source and third-party vendor technologies
• Experience working with AWS Cloud, including AWS services and development tooling
• Solid experience with Cloudera
• Experience with integration architecture and common integration patterns supporting batch and real-time processing
• Understanding of data and information architecture, including data modeling and data management technologies
• Knowledge of or practical experience in Machine Learning
• Knowledge of security requirements and experience implementing appropriate controls within solutions
• Experience in architecture modeling using tools, notations, and best practices
• Strong communication skills in English
• Business acumen and understanding of banking domains, especially credit risk and credit portfolio management
Nice to have
• Hands-on experience with cloud data warehouses (Snowflake)
• Experience developing ETL pipelines using Snowflake
• Experience setting up CI/CD processes for ETL pipelines in Snowflake
• Experience orchestrating ETL pipelines
Technologies you'll work with
• AWS
• Machine Learning
• Cloudera
• Snowflake
• ETL
• Big Data
Client - why choose this particular client from the Jit portfolio?
Jit Team has had an over-decade-long relationship with the leading financial group in the Nordic countries, and we are privileged to be our client's premier partner in Poland. At present, over 200 Jit personnel are engaged in more than 60 projects for this major Norwegian provider of financial services with a global presence and a strong focus on modern technology. Our customer's work atmosphere is epitomized by Scandinavian culture, which suits people who value work-life balance and a feedback culture. Furthermore, all projects are executed in international teams, giving constant exposure to the English language.
About Jit Team
The Human factor of IT: it's not just a slogan, it's a philosophy. The foundation of Jit Team is people, which is why we prioritise you. We employ over 700 experienced experts and create highly specialised teams for clients from all over the world. We offer team members developmental projects, a wide range of benefits, and a proprietary professional development programme. Our maxim is also backed by charitable and educational activities: we support pupils and students by donating learning equipment, offer internships that help launch careers in IT, and supply water rescuers and hospitals with necessary equipment. We are a Polish company and we share what we have achieved over 15 years of activity, supporting local initiatives to keep the good energy circulating.
Data Scientist (Risk)
Revolut
⚲ Kraków, Warszawa, Wrocław, Rzeszów
30 900 - 34 400 PLN gross (employment contract)
Requirements
- SQL
- Python
About Revolut
People deserve more from their money. More visibility, more control, and more freedom. Since 2015, Revolut has been on a mission to deliver just that. Our powerhouse of products, covering spending, saving, investing, exchanging, travelling, and more, helps our 70+ million customers get more from their money every day. As we continue our lightning-fast growth, 2 things are essential to our success: our people and our culture. In recognition of our outstanding employee experience, we've been certified as a Great Place to Work™. So far, we have 13,000+ people working around the world, from our offices and remotely, to help us achieve our mission. And we're looking for more brilliant people. People who love building great products, redefining success, and turning the complexity of a chaotic world into the simplicity of a beautiful solution.
About the role
Our Data Science team solves complex problems with smart, practical solutions. Data Scientists and Analysts work directly with product teams to uncover insights, guide decisions, and improve how customers experience Revolut. They do this by delivering smart, scalable solutions that move the business forward. We're looking for a Data Scientist who'll help us with the effective assessment and management of risk by developing policies, methodologies, models, and systems for risk quantification, reporting, and monitoring. Up to shape what's next in finance? Let's get in touch.
What you'll be doing
• Developing and maintaining methodologies and policies for crypto risk modelling
• Delivering real impact to the product through rigorous data-driven solutions
• Building, enhancing, and maintaining the core risk engine, including margin models, leverage frameworks, and liquidation logic
• Developing and calibrating risk models across isolated/cross margin, partial/full liquidation, bankruptcy pricing, and future portfolio margining capabilities
• Collaborating with Engineering to deploy risk methodologies into production systems while iteratively improving models based on market conditions
• Preparing documentation for compliance, and presenting risk insights, exposures, and recommendations to senior leadership
What you'll need
• A degree in mathematics, statistics, financial engineering, machine learning, computer science, or quantitative finance
• 3+ years of experience in quantitative risk, financial engineering, or similar roles within trading, crypto, or fintech environments
• Knowledge of margin trading mechanics, derivatives pricing models, and liquidation/liquidity risk management
• Experience building and validating risk and margin models (VaR, Expected Shortfall, SPAN, CCP-style IM/VM, stress tests, scenario simulations)
• Proficiency in quantitative modelling and time-series methods (GARCH, ARIMA, stochastic calculus), with advanced Python + SQL skills for large-scale data analysis
• Familiarity with high-frequency datasets, and the ability to work well in fast-paced, real-time risk environments
Nice to have
• Experience with crypto derivatives and crypto market microstructure
• Exposure to market-making workflows and risks (inventory management, P&L drivers, spread optimisation, and internalisation strategies)
• Familiarity with regulatory and institutional risk frameworks (Basel III/IV, FRTB, SA-CCR, CCP margin methodologies)
Compensation range
• Kraków: PLN 30,900 - PLN 34,400 gross monthly*
• Poland: PLN 30,900 - PLN 34,400 gross monthly*
• Other locations: compensation will be discussed during the interview process
*Final compensation will be determined based on the candidate's qualifications, skills, and previous experience.
Building a global financial super app isn't enough. Our Revoluters are a priority, and that's why in 2021 we launched our inaugural D&I Framework, designed to help us thrive and grow every day. We're not just doing this because it's the right thing to do. We're doing it because we know that seeking out diverse talent and creating an inclusive workplace is the way to create exceptional, innovative products and services for our customers. That's why we encourage applications from people with diverse backgrounds and experiences to join this multicultural, hard-working team.
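The requirements above name VaR and Expected Shortfall explicitly. As a toy, self-contained illustration of the historical-simulation variants of those two measures (synthetic returns and invented parameters; nothing Revolut-specific), consider:

```python
import numpy as np

def var_es(returns: np.ndarray, alpha: float = 0.99) -> tuple[float, float]:
    """Historical-simulation VaR and Expected Shortfall at confidence alpha.

    VaR is the loss threshold exceeded with probability (1 - alpha);
    ES is the mean loss conditional on exceeding that threshold.
    """
    losses = -returns  # convention: positive number = loss
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return float(var), float(es)

if __name__ == "__main__":
    rng = np.random.default_rng(seed=42)
    # Synthetic daily returns with heavy-ish tails via Student's t.
    rets = rng.standard_t(df=4, size=10_000) * 0.01
    var99, es99 = var_es(rets, alpha=0.99)
    print(f"99% VaR: {var99:.4f}  99% ES: {es99:.4f}")
```

Production risk engines layer far more on top (filtered volatility such as GARCH, scenario shocks, portfolio aggregation), but ES being a tail average above the VaR quantile is the core relationship the sketch shows.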
Data Scientist (NLP Deep Learning Engineer)
Revolut
⚲ Kraków, Warszawa, Wrocław, Rzeszów
20 300 - 45 800 PLN gross (employment contract)
Requirements
- SQL
- Python
About Revolut
People deserve more from their money. More visibility, more control, and more freedom. Since 2015, Revolut has been on a mission to deliver just that. Our powerhouse of products, covering spending, saving, investing, exchanging, travelling, and more, helps our 70+ million customers get more from their money every day. As we continue our lightning-fast growth, 2 things are essential to our success: our people and our culture. In recognition of our outstanding employee experience, we've been certified as a Great Place to Work™. So far, we have 13,000+ people working around the world, from our offices and remotely, to help us achieve our mission. And we're looking for more brilliant people. People who love building great products, redefining success, and turning the complexity of a chaotic world into the simplicity of a beautiful solution.
About the role
Our Data Science team solves complex problems with smart, practical solutions and improves how customers experience Revolut. Our Deep Learning Engineers are at the forefront of GenAI and LLM integration, building transformative products that range from user-facing tools to advanced process optimisation. We're looking for a Deep Learning Engineer to work with the most advanced LLMs available, developing solutions that have a tangible impact on millions of customers worldwide. You'll collaborate cross-functionally with Product Owners, Software Engineers, Data Analysts, and Operations Managers to deliver automated, scalable solutions that elevate and revolutionise customer interaction. Up to break barriers and shape what's next for the future of Revolut's AI-driven capabilities? Let's get in touch.
What you'll be doing
• Building AI-driven features from scratch, like personal assistants, chatbots, copilots, and more
• Developing user-focused and backend features using deep learning
• Delivering impactful, scalable, data-driven AI solutions
• Collaborating with Product, Engineering, and Data teams to solve deep learning challenges
• Integrating cutting-edge AI technologies to drive innovation at Revolut
What you'll need
• Experience in deep learning within the natural language processing area and with large language models
• A bachelor's degree in a STEM major (mathematics, computer science, engineering)
• Excellent knowledge of data science (Python, SQL) and production tools
• A deep understanding of probability and statistics fundamentals
• Big-picture thinking to correctly diagnose problems and productionise research
• Excellent communication and collaboration skills to partner with Product Owners and business heads
Nice to have
• A master's or PhD in a quantitative discipline
• Solid experience with additional programming languages, such as Java, Scala, C++
• Experience at a large tech company worth >$15B
• School or university Olympiad medals in physics, maths, economics, or programming
Compensation range
• Kraków: PLN 20,300 - PLN 45,800 gross monthly*
• Poland: PLN 20,300 - PLN 45,800 gross monthly*
• Other locations: compensation will be discussed during the interview process
*Final compensation will be determined based on the candidate's qualifications, skills, and previous experience.
Building a global financial super app isn't enough. Our Revoluters are a priority, and that's why in 2021 we launched our inaugural D&I Framework, designed to help us thrive and grow every day. We're not just doing this because it's the right thing to do. We're doing it because we know that seeking out diverse talent and creating an inclusive workplace is the way to create exceptional, innovative products and services for our customers. That's why we encourage applications from people with diverse backgrounds and experiences to join this multicultural, hard-working team.
Data Scientist
Grid Dynamics Poland
⚲ Warszawa, Wrocław, Kraków, Gdańsk
Requirements
- Python
- Spark
- SQL
- Tableau
We are looking for a highly skilled Senior Data Scientist to join our global team of innovators. In this role, you will work at the heart of a massive digital products ecosystem, handling over 65,000 transactions per second. You won't just analyze data—you will translate complex business problems into technical solutions, building predictive models that optimize customer experiences and drive financial literacy for millions of users worldwide. If you are passionate about turning massive datasets into actionable insights and deploying real-time AI solutions, this is the place for you. Responsibilities: • Problem Formulation & Innovation: Translate high-level business challenges into technical data problems. Build and scale predictive models to optimize revenue, data insights, and customer experience. • End-to-End AI Lifecycle: Lead the full product cycle—from requirement gathering and data analysis to real-time model deployment and continuous monitoring. • Data Transformation: Perform scalable data transformations on both structured and unstructured datasets to fuel your models. • Pipeline Automation: Design and implement automated methods for all stages of the predictive pipeline to minimize manual labor in both development and production environments. • Strategic Collaboration: Work closely with other data science colleagues and cross-functional teams to provide strategic recommendations based on your analyses. • Client Engagement: Act as a technical consultant for internal and external partners, presenting results and guiding critical thinking through data-driven insights. Minimum requirements: • Experience: 5+ years of professional experience in Data Science or AI Engineering roles. • Education: Advanced degree (Master's or PhD) or a Bachelor’s in an analytical field such as Statistics, Computer Science, or Economics. • Modeling Mastery: Deep understanding of techniques including Logistic Regression, Naïve Bayes, SVM, Decision Trees, and Neural Networks. • Big Data Tools: Proven ability to extract and aggregate data from large-scale datasets using SQL, Hive, or Spark. • Programming: Strong proficiency in Python, R, or Scala for large-scale data analysis. • ML & AI Tooling: Hands-on experience with machine learning packages and deploying models into production. • Visualization: Experience generating and visualizing insights using tools like Tableau. • Project Leadership: Track record of running projects from scoping to delivery, engaging effectively with stakeholders. Would be a plus: • Deep Learning: Hands-on experience with TensorFlow or PyTorch. • NLP: Experience with Natural Language Processing techniques. • Domain Expertise: Previous exposure to financial services, credit cards, or merchant analytics. • Engineering Alignment: Experience working within Scrum teams and a solid understanding of the Software Development Life Cycle (SDLC). • Cloud Infrastructure: Familiarity with AWS or GCP data solutions. We offer: • Opportunity to work on bleeding-edge projects • Work with a highly motivated and dedicated team • Competitive salary • Flexible schedule • Benefits package - medical insurance, sports • Corporate social events • Professional development opportunities • Well-equipped office About us: Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. 
Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.
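For a concrete sense of the "extract and aggregate data from large-scale datasets" requirement above, a minimal PySpark sketch; the table and column names are hypothetical, not from the role:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature-extraction").getOrCreate()

# Hypothetical transactions table; names are illustrative only.
tx = spark.table("analytics.transactions")

# Aggregate per-customer features that a downstream predictive model could consume.
features = (
    tx.groupBy("customer_id")
      .agg(
          F.count("*").alias("tx_count"),
          F.sum("amount").alias("total_spend"),
          F.avg("amount").alias("avg_ticket"),
          F.max("event_ts").alias("last_seen"),
      )
)

features.write.mode("overwrite").saveAsTable("analytics.customer_features")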
Data Scientist
Revolut
⚲ Kraków, Warszawa, Wrocław, Rzeszów
30 900 - 43 000 PLN gross (employment contract)
Requirements
- SQL
- Python
About Revolut People deserve more from their money. More visibility, more control, and more freedom. Since 2015, Revolut has been on a mission to deliver just that. Our powerhouse of products — including spending, saving, investing, exchanging, travelling, and more — help our 70+ million customers get more from their money every day. As we continue our lightning-fast growth, 2 things are essential to our success: our people and our culture. In recognition of our outstanding employee experience, we've been certified as a Great Place to Work™. So far, we have 13,000+ people working around the world, from our offices and remotely, to help us achieve our mission. And we're looking for more brilliant people. People who love building great products, redefining success, and turning the complexity of a chaotic world into the simplicity of a beautiful solution. About the role Our Data Science team solves complex problems with smart, practical solutions. Data Scientists and Analysts work directly with product teams to uncover insights, guide decisions, and improve how customers experience Revolut. They do this by delivering smart, scalable solutions that move the business forward. We’re looking for Data Scientists to help us shape the future of financial services. It’s a challenging task, but you’ll be part of a strong and supportive team. You’ll be collaborating with experienced professionals in Product, Design, Data Science, and Engineering on impactful projects that drive our company’s growth. Up to shape what's next in finance? Let's get in touch. What you’ll be doing • Improving existing algorithms • Building new PoCs • Delivering data-driven solutions to enhance product performance • Researching and developing PoCs into scalable data products • Collaborating with product owners, engineers, and data scientists to continually solve complex data problems What you'll need • Experience building and leading a data science team within a scaling organisation • A bachelor's degree in a STEM subject (mathematics, computer science, engineering) • Solid knowledge of data science tools, including Python, SQL, and production frameworks • Demonstrated knowledge of probability and statistics fundamentals • The ability to think strategically and diagnose problems accurately • Experience productionising research into scalable solutions • Excellent communication and collaboration skills to partner with Product Owners and senior stakeholders Nice to have • An advanced degree in a quantitative discipline (e.g., master's or PhD) • Experience with additional programming languages, such as Java, Scala, or C++ • Familiarity with anti-fraud functions • Experience working at a large-scale global tech company • Participation in Olympiad competitions in physics, mathematics, economics, or programming at school or university level Compensation range • Krakow: PLN30,900 - PLN43,000 gross monthly* • Poland: PLN30,900 - PLN43,000 gross monthly* • Other locations: Compensation will be discussed during the interview process *Final compensation will be determined based on the candidate's qualifications, skills, and previous experience Building a global financial super app isn’t enough. Our Revoluters are a priority, and that’s why in 2021 we launched our inaugural D&I Framework, designed to help us thrive and grow every day. We're not just doing this because it's the right thing to do. 
We’re doing it because we know that seeking out diverse talent and creating an inclusive workplace is the way to create exceptional, innovative products and services for our customers. That’s why we encourage applications from people with diverse backgrounds and experiences to join this multicultural, hard-working team.
Data Scientist
IN4GE sp. z o.o.
⚲ Kraków, Warszawa, Wrocław
130–170 PLN net (+ VAT) / hour
Requirements
- Python
- SQL
- NumPy
- scikit-learn
- AWS
- Google Cloud Platform
- Microsoft Azure
Our requirements: At least 2–4 years of commercial experience in Data Science, data analytics, or a related field. Very good knowledge of Python and SQL. Familiarity with analytics and ML libraries such as pandas, NumPy, scikit-learn, or similar. Ability to analyse data, build models, and interpret results. Understanding of statistics and data modelling. Ability to communicate findings in a way business stakeholders understand. English at minimum B2 level. Nice to have: Experience with a public cloud such as AWS, GCP, or Azure. Familiarity with BI or data visualization tools. Experience with NLP, recommender systems, or prediction. Knowledge of MLOps or model deployment. About the project: For our Client, we are looking for a Data Scientist to collaborate on projects delivered in a modern technology environment. It is a role for someone who wants to use data to build models, analyses, and solutions that support business decisions. The scope of cooperation covers data analysis and the development of predictive models as well as close collaboration with business stakeholders and technology teams. We are looking for someone who can combine an analytical approach with an understanding of business needs and wants to take part in projects with a real impact on the organization. Responsibilities: Analysing data and identifying relationships, trends, and areas for optimization. Building, developing, and validating statistical and Machine Learning models. Preparing data for analyses and models. Collaborating with business and product teams to translate needs into analytical solutions. Creating recommendations that support business decision-making. Taking part in presenting analysis results and conclusions to stakeholders. Working with technology teams on deploying the developed solutions. We offer: B2B cooperation. A rate of 130–170 PLN/h net + VAT, depending on experience and fit for the project. The opportunity to take part in projects delivered in an international environment. Collaboration with teams that value knowledge sharing and partner-like communication. Real influence on the development of analytical and business solutions. A hybrid cooperation model in one of the locations: Wrocław, Kraków, Warszawa.
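To make the toolset above concrete (pandas plus scikit-learn, with results a business stakeholder can read), a minimal sketch; the file name, columns, and churn task are invented for illustration, and the feature columns are assumed numeric:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Hypothetical dataset with numeric feature columns and a binary label.
df = pd.read_csv("customers.csv")
X, y = df.drop(columns=["churned"]), df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scale features, then fit a simple, explainable baseline classifier.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Metrics in a form that is easy to discuss with business stakeholders.
print(classification_report(y_test, model.predict(X_test)))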
Data Engineer / DataOps Engineer
emagine Polska
⚲ Warsaw
Requirements
- Microsoft Platform
- Machine Learning (ML)
- Git
- DataStage (ETL)
- SQL
- Testing
- Cloud
- ETL
- Microsoft Azure
- CI/CD
🌍Remote work: fully remote. 📑Assignment type: B2B. 📕Project language: English. ⏳Project length: > 12 months + prolongations. ⏰Start: ASAP / 1 month. 💻Workload: full time. ⚙️Recruitment process: 2 interviews with the client. 💼 Industry: IT Services / Digital Consulting 🔍Additional information: After receiving the offer, a background check is carried out (references, criminal record check, etc.). Summary: The primary purpose of the Data Engineer / DataOps Engineer role is to develop and manage data pipelines and workflows, enhancing the organization's data platform capabilities. This position is crucial for ensuring that data is processed reliably and efficiently, enabling better decision-making across the company. Responsibilities: • Develop and maintain end-to-end data pipelines using Snowflake as the core data platform. • Build ELT workflows using dbt and manage orchestration with Airflow. • Implement and support DataOps processes, including CI/CD automation, monitoring, and workload deployment on Kubernetes. • Optimize Snowflake performance, including warehouses, storage usage, and query efficiency. • Ensure data reliability through data validation, testing, and monitoring practices. • Integrate various data sources and manage ingestion processes into Snowflake. • Collaborate with cross-functional teams to deliver reliable, production-ready data solutions. • Follow engineering best practices, maintain coding standards, and support continuous improvement. • Support team knowledge sharing and mentor junior developers when needed. Key Requirements: • 5+ years of professional practice in data engineering. • Strong, practical experience with Snowflake (views, tables, performance tuning, orchestrated ELT processes). • Solid expertise using dbt for SQL-based transformations. • Hands-on experience with Airflow for workflow scheduling and automation. • Experience deploying and maintaining containerized workloads on Kubernetes. • Familiarity with cloud environments, with strong understanding of Microsoft Azure services. • Practical experience building ETL/ELT pipelines and maintaining production data workflows. • Good understanding of Git-based development, CI/CD pipelines, and general DevOps principles. • Analytical mindset and ability to troubleshoot issues in complex systems. Nice to Have: • Experience with event streaming or messaging systems. • Familiarity with data quality tools. • Exposure to observability or platform engineering tooling. • Understanding of MLOps concepts or ML workflow integration.
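As a sketch of how the Snowflake + dbt + Airflow stack named above typically hangs together (assuming Airflow 2.x; the dbt project directory and target are placeholders, not the client's setup):

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Daily ELT: transform in Snowflake via dbt, then run dbt's data tests.
with DAG(
    dag_id="snowflake_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt --target prod",
    )
    dbt_run >> dbt_test

A failure in dbt_test stops the DAG, which is the kind of validation-and-monitoring hook the reliability duties above point at.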
Data Analyst (Product)
Revolut
⚲ Kraków, Warszawa, Wrocław, Rzeszów
12 600 - 31 500 PLN gross (employment contract)
Requirements
- SQL
- Python
About Revolut People deserve more from their money. More visibility, more control, and more freedom. Since 2015, Revolut has been on a mission to deliver just that. Our powerhouse of products — including spending, saving, investing, exchanging, travelling, and more — help our 70+ million customers get more from their money every day. As we continue our lightning-fast growth, 2 things are essential to our success: our people and our culture. In recognition of our outstanding employee experience, we've been certified as a Great Place to Work™. So far, we have 13,000+ people working around the world, from our offices and remotely, to help us achieve our mission. And we're looking for more brilliant people. People who love building great products, redefining success, and turning the complexity of a chaotic world into the simplicity of a beautiful solution. About the role Our Data Science team solves complex problems with smart, practical solutions. Data Analysts and Scientists work directly with product teams to uncover insights, guide decisions, and improve how customers experience Revolut. They do this by delivering smart, scalable solutions that move the business forward. We’re looking for a talented Data Analyst to dig into our complex databases, find the root causes of problems, design your own solutions, and write your own code to implement them. We aim for you to never stop learning, picking up new skills and delivering value along the way. Up to shape what's next in finance? Let's get in touch. What you’ll be doing • Understanding our business and its processes through our data • Applying this understanding and knowledge of data to help product and service teams • Designing key metrics to measure different aspects of the business • Creating and maintaining new aggregated views and tables to simplify data querying • Maintaining and creating new dashboards to track metrics and visualise insights • Promoting data literacy across the company, organising and holding workshops What you'll need • Previous experience in an analytical role, creating impactful solutions • A background/education in a quantitative discipline • 5+ years of experience with SQL • Great skills with Python or other programming languages • Evidence of solid mathematical and statistics knowledge Compensation range • Krakow: PLN12,600 - PLN31,500 gross monthly* • Poland: PLN12,600 - PLN31,500 gross monthly* • Other locations: Compensation will be discussed during the interview process *Final compensation will be determined based on the candidate's qualifications, skills, and previous experience Building a global financial super app isn’t enough. Our Revoluters are a priority, and that’s why in 2021 we launched our inaugural D&I Framework, designed to help us thrive and grow every day. We're not just doing this because it's the right thing to do. We’re doing it because we know that seeking out diverse talent and creating an inclusive workplace is the way to create exceptional, innovative products and services for our customers. That’s why we encourage applications from people with diverse backgrounds and experiences to join this multicultural, hard-working team.
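To picture the "aggregated views and tables to simplify data querying" duty above, a hypothetical example; the schemas and columns are invented, and the SQL would run through whatever warehouse client is in use:

# One daily rollup so analysts query a small view instead of raw events.
CREATE_DAILY_ROLLUP = """
CREATE OR REPLACE VIEW analytics.daily_active_users AS
SELECT
    DATE(event_ts)          AS activity_date,
    COUNT(DISTINCT user_id) AS dau,
    COUNT(*)                AS events
FROM raw.app_events
GROUP BY DATE(event_ts);
"""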
Data Analyst (Engineer)
Revolut
⚲ Kraków, Warszawa, Wrocław, Rzeszów
15 900 - 23 800 PLN gross (employment contract)
Requirements
- SQL
- Python
About Revolut People deserve more from their money. More visibility, more control, and more freedom. Since 2015, Revolut has been on a mission to deliver just that. Our powerhouse of products — including spending, saving, investing, exchanging, travelling, and more — help our 70+ million customers get more from their money every day. As we continue our lightning-fast growth, 2 things are essential to our success: our people and our culture. In recognition of our outstanding employee experience, we've been certified as a Great Place to Work™. So far, we have 13,000+ people working around the world, from our offices and remotely, to help us achieve our mission. And we're looking for more brilliant people. People who love building great products, redefining success, and turning the complexity of a chaotic world into the simplicity of a beautiful solution. About the role Our Data Science team solves complex problems with smart, practical solutions. Data Scientists and Analysts work directly with product teams to uncover insights, guide decisions, and improve how customers experience Revolut. They do this by delivering smart, scalable solutions that move the business forward. We’re looking for a Data Analyst who can dig into our complex databases, look for the root cause of a problem, and design their own solutions by writing code to implement them. This is an opportunity to apply your skills while picking up new ones and delivering value for millions of customers around the world. Up to shape what's next in finance? Let's get in touch. What you’ll be doing • Understanding our business and its processes through data • Applying this understanding and knowledge of data to help product and services teams • Developing documentation and data governance • Owning the entire ETL process • Designing key metrics to measure different aspects of the business • Creating and maintaining new aggregated views and tables to simplify data querying • Providing clean data sets, modelling data in a way that empowers end users to answer their own questions What you'll need • 5+ years of experience in an analytical role, creating impactful data-driven solutions • A solid background/education in a quantitative discipline • Great skills with Python, SQL, or other programming languages • Evidence of exceptional mathematical and statistics knowledge Nice to have • An advanced degree in a core STEM subject • Experience with additional programming languages (Java, Scala, C++, etc.) • Medals from school or university Olympiad competitions in physics, maths, economics, or programming Compensation range • Krakow: PLN15,900 - PLN23,800 gross monthly* • Poland: PLN15,900 - PLN23,800 gross monthly* • Other locations: Compensation will be discussed during the interview process *Final compensation will be determined based on the candidate's qualifications, skills, and previous experience Building a global financial super app isn’t enough. Our Revoluters are a priority, and that’s why in 2021 we launched our inaugural D&I Framework, designed to help us thrive and grow every day. We're not just doing this because it's the right thing to do. We’re doing it because we know that seeking out diverse talent and creating an inclusive workplace is the way to create exceptional, innovative products and services for our customers. That’s why we encourage applications from people with diverse backgrounds and experiences to join this multicultural, hard-working team.
Data Analyst
GAMIVO S.A.
⚲ Poznań, Stare Miasto
8 000–12 000 PLN / month (depending on contract type)
Requirements
- SQL
Our requirements: At least 2 years of experience in data analysis Very good knowledge of SQL (a must) Experience with the Google analytics stack: GA4, Looker Studio, BigQuery (a must) Strong analytical skills and the ability to draw business conclusions English at min. B2 level Nice to have: Experience in and knowledge of the e-commerce industry Familiarity with online marketing Responsibilities: Creating data visualizations, reports, and dashboards Analysing and assessing the effectiveness and potential of the company's business areas Independently developing and implementing new reports or changing existing ones Developing analysis and reporting tools Working with the management and marketing teams We offer: Remote or hybrid work Discounts on games, gift cards (iTunes, Netflix, Zalando, Amazon, etc.) and other products available on the GAMIVO platform :) The myBenefit cafeteria – whether you're a sports fan or a culture lover, you're sure to find the perfect benefit for yourself A training budget Co-financing for a Multisport card Co-financing for private medical care Co-financing for life insurance Playing games with colleagues at work during breaks or after hours
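As a sketch of the GA4 + BigQuery work the listing describes (assumes the google-cloud-bigquery client; the project and dataset IDs are placeholders):

from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

# GA4's BigQuery export lands in daily events_* tables; IDs below are invented.
sql = """
SELECT event_date,
       COUNT(DISTINCT user_pseudo_id)   AS users,
       COUNTIF(event_name = 'purchase') AS purchases
FROM `my-project.analytics_123456.events_*`
WHERE _TABLE_SUFFIX BETWEEN '20240101' AND '20240107'
GROUP BY event_date
ORDER BY event_date
"""

df = client.query(sql).to_dataframe()  # could then feed a Looker Studio report
print(df)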
AI Native Engineer | Logistics Industry
Edge One Solutions Sp. z o.o.
⚲ Warszawa
Requirements
- ChatGPT
- Claude
- Gemini
- Zapier
- Make
- n8n
Our requirements: Experience working with AI models (LLMs) such as ChatGPT, Claude, Gemini, or similar in everyday tasks Practical skill in designing prompts and building effective interactions with models Experience building AI-based workflows (e.g., automations, pipelines, agents) Familiarity with process automation and orchestration tools (e.g., Zapier, Make, n8n, or similar) Ability to prototype quickly and build solutions iteratively Understanding of AI limitations and the ability to design solutions resilient to model errors Experience working with data (analysis, processing, interpreting AI-generated output) Ability to decide when to use AI and when a manual or classical solution is the better choice Openness to experimenting with and testing new approaches and tools English good enough to work with documentation and tooling About the project: Our client is an innovative European company providing out-of-home delivery solutions, known for its network of smart lockers in many countries. Thanks to its extensive technology infrastructure, it enables fast, convenient, and eco-friendly parcel collection. As a listed company (with a market capitalization of ~USD 5 billion in March 2023) and a team of over 10,000 employees, it is one of the largest logistics operators in Europe, constantly investing in technology and automation. The project focuses on building solutions in an AI-native approach, where AI is the foundation of how the work is done rather than just a support for it. It covers building agents, workflows, and automations capable of carrying out complex business processes. Work follows a rapid prototyping and iteration model, with an emphasis on real results and measurable value. The project is greenfield, which gives a lot of freedom in choosing solutions and experimenting. Close collaboration with the engineering team is key, combining speed of delivery with quality and scalability. 
Responsibilities: Designing and implementing AI workflows that solve real business problems Building and developing agents and pipelines (analysis → decision → action) Rapidly prototyping solutions and iteratively improving them based on results Integrating AI models with tools, data, and existing systems Automating processes to increase efficiency and reduce manual work Identifying and eliminating potential errors and limitations in how the AI behaves Documenting the solutions you build so the team can reuse them Working with technical teams to deploy and scale solutions Testing new tools and approaches and proposing improvements Building solutions with a high output-to-effort ratio We offer: Individual support from a Service Delivery Manager, who will help you plan your career path and look after your comfort and satisfaction on the project Training, certifications, and conferences – we co-finance or fully cover the cost of developing technical competencies SmartChange – we let you change projects and match the next one to your preferences We care about your work-life balance, so outside of work we offer team outings, sports activities, and inspiring #edge1talks webinars that develop and connect people Physical activity – we support sports initiatives and renting training rooms Health package – private medical care, a sports card, insurance, and psychological support (HearMe) Flexible benefits – you decide what to spend the points in our benefits system on Free English lessons – matched to your level, ideal for working in an international environment
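A minimal sketch of the "analysis → decision → action" pipeline idea above, assuming the openai>=1.0 Python client (any LLM API would do; the model name, labels, and routing are invented):

from openai import OpenAI

client = OpenAI()

def classify(ticket: str) -> str:
    """Analysis: ask the model to label an incoming parcel-support ticket."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption, not from the posting
        messages=[
            {"role": "system",
             "content": "Label the ticket as one of: refund, delivery_delay, other. "
                        "Reply with the label only."},
            {"role": "user", "content": ticket},
        ],
    )
    return resp.choices[0].message.content.strip()

def handle(ticket: str) -> str:
    """Decision -> action: route on the label; anything unclear goes to a human."""
    label = classify(ticket)
    if label == "refund":
        return "queued_for_refund_workflow"
    if label == "delivery_delay":
        return "sent_tracking_update"
    return "escalated_to_human"  # resilience to model errors: unknown labels get a person

print(handle("My parcel has been stuck at the locker for 3 days."))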
DWH Support, immediate start, remote and FFM (m/f/d)
emagine Polska
⚲ Frankfurt am Main
Requirements
- Deployment
- Business Intelligence (BI)
- Data Warehouse (DW)
- DataStage (ETL)
- business requirements
- ETL
- SQL
- Testing
We, emagine GmbH, are looking for an experienced DWH specialist (m/f/d). Start: ASAP. Location: remote and Frankfurt am Main. Workload: full time. Duration: until the end of the year, with an option to extend. Your tasks: • Analysis, design, and implementation of business requirements in the BI and DWH environment, especially ETL processes. • Analysis and design of processes, specification and interactive capture of user stories. • Support in the development and testing of ETL pipelines. Your profile: • Good knowledge of data warehouse architectures and technologies and their implementation. • Several years of practice with tools in the DWH environment (ETL tools such as DataStage, workflow management tools such as Automic, and automated deployment tools) and very good SQL skills. • Ideally, existing experience with Oracle, DataStage, Automic, or Data Vault 2.0 modelling. • Ideally, existing experience in a DWH or finance / regulatory reporting environment. • Very good German and English skills, both spoken and written.
Senior Snowflake Engineer/Architect with Data Security
Spyrosoft
⚲ Warszawa, Wrocław, Kraków
130 - 185 PLN/h net (B2B)
Requirements
- Snowflake
- Azure
- Terraform
- Bicep
- Snowflake Cortex
We are looking for an experienced Senior Snowflake Data Architect to join our growing data team. This role is ideal for someone who combines deep technical expertise in Snowflake and Azure with strong architectural thinking, security awareness, and excellent communication skills. Requirements: • 8+ years of experience in Data Engineering roles • 5+ years of hands-on experience with Snowflake and Azure • 2+ years of experience as a Snowflake Architect • Proven experience with Infrastructure as Code (Terraform or Bicep) • Strong understanding of Azure Cloud architecture • Deep knowledge of Snowflake security and data protection mechanisms • Advanced data modeling skills • Experience assessing and optimizing Snowflake platforms for security and scalability • Strong communication skills and ability to work with both technical and non-technical stakeholders Nice to Have • Experience in large-scale enterprise environments • Exposure to AI-driven data solutions beyond Snowflake Cortex • Certifications in Snowflake or Azure Main responsibilities: • Design, architect, and optimize scalable data solutions using Snowflake and Azure • Lead Snowflake architecture initiatives, ensuring performance, scalability, and security best practices • Conduct Snowflake Data Platform assessments, particularly in security and scalability areas • Implement Infrastructure as Code (IaC) solutions using Terraform or Bicep • Leverage Snowflake AI capabilities (e.g., Cortex AI) to enhance data solutions • Develop and maintain robust data models to support analytics and business intelligence • Define and enforce data security standards (access controls, encryption, auditability) • Collaborate with stakeholders, engineers, and business teams to translate requirements into technical solutions • Provide technical leadership and architectural guidance to data engineering teams Who will you work with? You will join a cross-functional team of data engineers, platform engineers, security specialists and product owners focused on delivering data solutions for energy and e‑mobility products. The team values collaboration, clear communication and a pragmatic approach to secure, scalable design. What will you do? • Architect secure Snowflake-based solutions and drive their implementation across projects. • Work with stakeholders to translate business requirements into secure data platform designs. • Define and enforce security and governance standards for data access, processing and sharing. • Participate in technical reviews, threat modelling and security assessments for data initiatives.
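To illustrate the kind of Snowflake security controls the role covers, a hedged sketch using the snowflake-connector-python driver; every object, role, and credential name below is a placeholder:

import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...", role="SECURITYADMIN"
)
cur = conn.cursor()

# Column-level protection: mask emails for everyone except a privileged role.
cur.execute("""
CREATE OR REPLACE MASKING POLICY pii.email_mask AS (val STRING)
RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('DATA_PRIVACY_ADMIN') THEN val ELSE '*** MASKED ***' END
""")
cur.execute(
    "ALTER TABLE crm.customers MODIFY COLUMN email SET MASKING POLICY pii.email_mask"
)

# Role-based access control: analysts get read-only access to the curated schema.
cur.execute("GRANT USAGE ON SCHEMA crm TO ROLE ANALYST")
cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA crm TO ROLE ANALYST")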
Senior ML/Data Scientist with focus on LLM
SQUARE ONE RESOURCES sp. z o.o.
⚲ Warszawa, Mokotów
120–160 PLN net (+ VAT) / hour
Requirements
- Python
- AI
- NLP
- Deep learning
- MLOps
- Machine Learning
- LLM
- Cloud solutions
- PyTorch
- TensorFlow
- R
- Keras
Our requirements: Experience with LLM application development, in particular agentic design such as tool use and reasoning. Experience in building data pipelines and deployment pipelines for LLM applications. Recent experience with ML/AI toolkits such as AWS SageMaker (other toolkits like PyTorch, TensorFlow, Keras, MXNet, H2O, etc. are nice to have). Experience with MLOps technologies (SageMaker, Vertex AI, Kubeflow). Experience with cloud solutions (AWS / Azure / GCP), Docker. Proven scripting and automation skills. Good knowledge of: Git, Bash, Linux, CI/CD tools (e.g. Jenkins, GitLab CI), the software lifecycle, relational databases, visualization tools (e.g. Tableau), Jira, Confluence. Programming languages: Python, R. Test-driven development, good coding practices. Problem-solving and decision-making skills. Customer & delivery focus. Ability to work effectively with team members and virtual teams from different locations and different cultural backgrounds. Nice to have: Experience with LLM fine-tuning a big plus. Experience with deployment of scalable apps a plus. Experience with clinical study data a plus. About the project: The project enables finding, accessing, processing, publishing, and sharing biomedical data to generate insight for secondary use. It also contains an EDIS end-to-end engine for secondary use and primary exploration, and integration with externally generated real-world data (RWD) from both clinical and non-clinical sources. Responsibilities: Design, develop, and deploy solutions based on large language models (LLMs), including agent design and tool utilization for problem-solving. Fine-tune LLM models and optimize their performance to meet specific business requirements. Develop and optimize data pipelines and deployment pipelines for LLM-based applications. Integrate clinical, non-clinical, and external real-world data (RWD) from various sources. Work with ML/AI tools, including AWS SageMaker, PyTorch, TensorFlow, Vertex AI, and implement MLOps solutions using tools like Kubeflow. Create scripts and automate processes using tools like Git, Bash, Docker, and Kubernetes. Develop scalable applications in cloud environments (AWS, Azure, GCP). Implement Continuous Integration / Continuous Deployment (CI/CD) practices using tools like Jenkins or GitLab CI. Collaborate with teams across different locations and cultures to deliver customer-oriented solutions. Test and optimize ML models, manage training and testing datasets, and mitigate overfitting.
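A minimal LoRA fine-tuning sketch of the kind the listing mentions, using the Hugging Face transformers and peft libraries; gpt2 and its c_attn target module are illustrative choices, not the project's models:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a small base model; a real project would pick a suitable LLM.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA trains low-rank adapter matrices instead of the full weights,
# which is what makes fine-tuning tractable on modest hardware.
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # the attention projection in GPT-2
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model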
Senior Data Scientist (Ad-Tech / CTV / DSP)
Edge One Solutions Sp. z o.o.
⚲ Kraków
Requirements
- Data Science
- Python
- SQL
- DSP
- Machine Learning
About us: The Edge One Solutions community was born out of a love for new technologies. We number more than 350 experts, and within the Euvic Group, of which we are a part, there are already 5,500 of us. An individual approach lets us match projects to skills, so we genuinely support each other in growing our competencies. About the client: Our client delivers AI solutions that maximize business impact and minimize risk for organizations. The platform and applications integrate with key business processes, enabling teams to develop, deploy, and manage AI at scale. About the role: We are looking for an experienced Senior Data Scientist to join a project around a modern CTV advertising platform and support the development of attribution, bidder optimization, and audience intelligence solutions. Work mode: Remote. What will help you deliver the project: • 7+ years of experience as a Data Scientist (strong ad-tech focus) • Experience in DSP / programmatic / CTV measurement • Hands-on practice in bidder optimization and attribution models • Very good knowledge of statistics, ML, Python, SQL • A university degree (MSc / PhD preferred – CS, statistics, economics, or related) Nice to have: • GCP / Vertex AI, feature stores, experiment tracking • Experience with LLMs / agentic AI (especially evaluation) • Ability to communicate with the business • A research-driven, innovative mindset Your contribution to the project: • Designing and deploying ML models for bidder optimization, pricing, budget allocation, and pacing • Building model orchestration systems across multiple business KPIs • Developing attribution and measurement models (CTV) • Working on audience segmentation (first- and third-party data) • Analysing campaign effectiveness and making product recommendations • Collaborating with Product & Engineering on production deployments How we will take care of you: By joining Edge One Solutions, you can count on comprehensive support and real growth – professional as well as personal: • Individual support from a Service Delivery Manager who will help you plan your career path and look after your comfort on the project • Training, certifications, and conferences – we co-finance or fully cover the cost of developing technical competencies • #SmartChange – we let you change projects and match the next one to your preferences • Not just work – outings, sports, and our #edge1talks webinars • Physical activity – we support sports initiatives and renting training rooms • Health package – private medical care, a sports card, insurance, and psychological support (HearMe) • Flexible benefits – you decide what to spend the points in our benefits system on • Free English lessons – matched to your level, ideal for working in an international environment Don't meet all the requirements but find the project interesting? Don't hesitate – apply. Let's talk!
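To make "bidder optimization" concrete, a toy sketch on synthetic data: fit a win-rate model, then derive a simplistic value-based bid. This illustrates the idea only and is not the client's pricing logic:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic auction log: 4 invented features -> did our bid win?
X = rng.normal(size=(10_000, 4))
won = (X @ np.array([0.8, -0.4, 0.3, -1.2]) + rng.normal(size=10_000)) > 0

win_model = LogisticRegression().fit(X, won)

def value_based_bid(x: np.ndarray, value_per_impression: float) -> float:
    """Scale the bid by predicted win probability; a real bidder would also
    handle pacing, budget allocation, and auction dynamics."""
    p_win = win_model.predict_proba(x.reshape(1, -1))[0, 1]
    return value_per_impression * p_win

print(value_based_bid(X[0], value_per_impression=2.5))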
Senior Data Engineer (Azure Data Factory/Powershell) | f/m/d
ERGO Technology & Services S.A.
⚲ Gdańsk, Warszawa, Śródmieście
Requirements
- CI/CD
- Azure Data Factory
Our requirements: fluent spoken and written English expertise in writing and optimizing SQL code and good understanding of MS SQL internals hands-on experience with Azure Data Factory (ADF) pipelines proficiency in incremental and full data loads familiarity with CI/CD integration for data workflows knowledge of Databricks and Python for advanced data processing strong grasp of dimensional modeling and schema design experience implementing Change Data Capture (CDC) in production environments ability to configure and troubleshoot CI/CD pipelines for database deployments skill in pre- and post-deployment validations expertise in query optimization and resolving performance bottlenecks proficiency in debugging issues across staging and production environments familiarity with Active Directory (AD) group management knowledge of role-based access control integration experience with Entra ID for secure data access Responsibilities: designing and implementing data models and architectures that support financial data management developing and maintaining ETL processes to integrate data from diverse sources, including market data feeds, customer information systems, and transactional databases ensuring data quality, security, and compliance with regulatory requirements collaborating with business owners, data analysts and scientists, software engineers, and DevOps specialists to support the development of predictive models and business intelligence tools optimizing data processing workflows for performance, scalability, and reliability We offer: Let's be healthy – medical package, sports card, and numerous sports sections – these are some of the benefits that help our employees stay in good shape. Let's be balanced – work-life balance is a key aspect of a healthy workplace. We offer our employees flexible working hours, a confidential employee assistance program, as well as the possibility of remote working. However, staying at home with our in-office gaming room and dog-friendly office in Warsaw won’t be easy. Let's be smart – we organize numerous workshops and training courses. Thanks to hackathons and meetups, our specialists share their expertise with others. Additionally, we have a wide range of digital learning platforms and language courses. Let's be responsible – each year, we participate in several CSR activities, during which, together with our colleagues, we do our best to create a better future. Let's be fun – company-wide bike races and soccer matches, film marathons in our cinema room or other engaging team-building activities – we got it covered! Let's be diverse – every team member is valued, regardless of gender, nationality, religious beliefs, disability, age, and sexual orientation or identity. Your qualifications, experience, and mindset are our greatest benefit!
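As a sketch of the incremental-load pattern named in the requirements (watermark-based delta extraction of the kind an ADF pipeline orchestrates); the DSN, tables, and columns are invented, and pyodbc is just one way to run it:

import pyodbc  # assumes a SQL Server ODBC driver is installed

GET_WATERMARK = "SELECT last_value FROM etl.watermarks WHERE table_name = 'sales'"
INCREMENTAL_EXTRACT = """
SELECT * FROM dbo.sales
WHERE modified_at > ?   -- previous watermark
  AND modified_at <= ?  -- watermark captured at the start of this run
"""
UPDATE_WATERMARK = "UPDATE etl.watermarks SET last_value = ? WHERE table_name = 'sales'"

conn = pyodbc.connect("DSN=warehouse")  # placeholder connection
cur = conn.cursor()

old_wm = cur.execute(GET_WATERMARK).fetchval()
new_wm = cur.execute("SELECT SYSUTCDATETIME()").fetchval()
rows = cur.execute(INCREMENTAL_EXTRACT, old_wm, new_wm).fetchall()
# ... load `rows` into the target, then advance the watermark only on success:
cur.execute(UPDATE_WATERMARK, new_wm)
conn.commit()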
Senior Azure Databricks Specialist
BRAVER IT SPÓŁKA AKCYJNA
⚲ Kraków
150–195 PLN net (+ VAT) / hour
Requirements
- Databricks
- PySpark
- SQL
- Microsoft Azure
- ADF
- Storage
- Microsoft Power BI
- Git
- Azure DevOps
Our requirements: 3–6+ years of experience in Data Engineering Strong experience with Databricks, PySpark, SQL Experience with Azure ecosystem (ADF, Storage, SQL) Ability to work independently in a dynamic environment Availability to start immediately or within a short notice period English level: B2+ Nice to have: Power BI Experience with CI/CD (Azure DevOps, Git) Knowledge of Lakehouse architecture Responsibilities: Build and maintain scalable data pipelines using Databricks (PySpark, SQL) Develop data transformation workflows (ETL/ELT) Work with Azure data services (ADF, Storage, SQL DB) Optimize data processing and performance Collaborate with international teams and stakeholders We offer: Immediate project start Fast-track recruitment process Large-scale data platform experience Flexible work model (remote/hybrid) Long-term cooperation
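A minimal example of the Databricks pipeline work described above: PySpark reading raw JSON and writing a cleaned Delta table. The storage path and table names are placeholders:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Bronze -> silver: read raw files landed by ADF, clean them, persist as Delta.
raw = spark.read.json("abfss://landing@mystorageaccount.dfs.core.windows.net/orders/")

clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

clean.write.format("delta").mode("overwrite").saveAsTable("silver.orders")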
T-SQL/.NET/Dynamics 365 Developer (m/f/n)
Mago S.A.
⚲ Nadarzyn (Pruszków County)
Requirements
- Microsoft SQL Server
- T-SQL
- ASP.NET
- JavaScript
Our requirements: You have experience working independently as a Developer (min. 2 years). You have practical knowledge of Microsoft SQL Server (2012–2025) and T-SQL. You have experience developing ERP-class systems. A degree in computer science or a related field. You analyse and solve problems independently. You communicate easily with people at different levels of the organization and with different degrees of technical expertise. Nice to have: Knowledge of Softlab ERP. ASP.NET, JavaScript. .NET WPF. Ability to use DevExpress ASP.NET components. Dynamics 365 (Sales, SDK). Responsibilities: Developing the ERP system (Softlab) for the company's internal needs. Technical support for users. Taking part in business projects (CRM, WMS, Power BI reports, ongoing technical support for Logistics and Production, among others). Working with a team of developers and analysts to improve the performance of the solutions in use and to maintain the existing system solutions. We offer: Work in a dynamically growing Polish company with international reach, with growth opportunities, in a team of Specialists and Experts. Work with a database of over 700 thousand objects and millions of records. Work tools suited to the role and necessary to carry out the assigned tasks. Stable employment conditions. Onboarding training and support during your first months on the job. Co-financing for MultiSport packages. A medical care programme and optional additional life insurance for the employee and family members. Holiday allowances and holiday-season benefits (a package of social benefits from the company social benefits fund, ZFŚS).
Principal Data Engineer (PST Overlap)
Appliscale
⚲ Kraków
30 000 - 34 000 PLN net (B2B)
Requirements
- Databricks
- Apache Spark
- Data Build Tool (DBT)
- Apache Airflow
- Atlan
- Amazon AWS
- Datadog
- Tableau Cloud
- Monte Carlo Data
- Fivetran
About the role Our client is one of the largest game studios known for their very successful MOBA and FPS franchises. You will be a member of the Data Foundations team, which collects and uses data to improve the player's experience. The Data Foundations team is an initiative accountable for decision science, data products, data capture, and data warehousing and governance. Their mission is to harness the power of data for player-centric decisions and AI/ML products that make it better to be a player. You'll work closely with a technical lead to expand data consumption and reporting capabilities and build solutions to improve the development lifecycle for data engineers and insight analysts throughout the organization. You'll be helping to enable advancements and fast iteration in Machine Learning and GenAI pipelines. As a Principal Data Engineer, you will design, build, and maintain scalable data pipelines and models that power critical data products, including centralized player data models and publishing activity data systems. These systems support understanding of player onboarding funnels, engagement patterns, support efficiency, and content effectiveness. You will collaborate with product managers, analysts, and engineers to ensure our data infrastructure is reliable, performant, and directly tied to delivering better player experiences. Responsibilities • Please note, availability to attend afternoon/evening meetings is a requirement for this role, as most of the team is located on the US West Coast (LA and Seattle) • Design, build, and optimize scalable ETL pipelines for structured and semi-structured data supporting Insights use cases, growth metrics, and other publishing/marketing-focused data sets • Design and implement data models using industry best practices that capture a complete ecosystem view of internal customer experiences, while ensuring accuracy, scalability, and long-term usability • Architect and implement robust, maintainable, and high-performance data solutions • Automate workflows to reduce manual intervention and enhance data processing efficiency, including automation for content, growth, and pubsports areas of coverage • Optimize query performance and resolve pipeline bottlenecks to improve data accessibility • Evaluate and adopt new tools, frameworks, and methodologies to advance data engineering capabilities • Support cost optimization by ensuring scalable and efficient data solutions • Ensure data quality, governance, and compliance with regulatory standards (e.g., GDPR, CCPA) • Contribute to engineering discipline by shaping infrastructure, craft standards, tooling, and organizational best practices Required qualifications • Minimum of 8 years of commercial work experience in data engineering or a related field • Bachelor's or higher degree in Computer Science, Software Engineering, or a related field • Proficiency in Python, essential for data processing and analysis tasks • Hands-on experience with DBT (Data Build Tool) and Airflow • Commercial experience using PySpark and Databricks • Proficiency in AWS cloud infrastructure • Effective communication and teamwork skills Nice to have • Experience with ML Operations and GenAI pipelines and infrastructure • Experience in the gaming industry, particularly with online multiplayer games • Experience working with cross-discipline organizations that build data products • Proficient in large-scale data manipulation across various data types • Demonstrated ability to troubleshoot and optimize complex ETL pipelines • Proven experience mentoring and guiding other engineers Tech stack • Databricks • Unity Catalog • Apache Spark + PySpark + SQL • DBT (Data Build Tool) • Apache Airflow + Astronomer • Atlan • Tableau Cloud • Monte Carlo Data • Fivetran • Datadog • Git/GitHub • AWS
People Manager – Data Engineering
SQUARE ONE RESOURCES sp. z o.o.
⚲ Warszawa, Mokotów
140–170 PLN net (+ VAT) / hour
Requirements
- AWS
- Google Cloud Platform
- agile
- scrum
- ETL
- SQL
- Python
Our requirements: Minimum 4 years of experience managing technical teams (10+ FTE) in tech, digital, or e-commerce environments Proven experience leading engineering teams to deliver against business objectives Hands-on experience in Agile leadership roles (Agile Coach, Scrum Master, Product Owner, or similar) Strong understanding of Agile methodologies (SCRUM, iterative delivery, team facilitation) Demonstrated experience in coaching, mentoring, and developing engineers Ability to act as a trusted advisor on People Strategy and organizational growth Excellent stakeholder management and communication skills Strong relationship-building capabilities across technical and business teams Experience creating collaborative, innovative engineering environments Exposure to cloud platforms (AWS or GCP) is a strong advantage Experience working within data or engineering organizations is preferred Fluency in English (spoken and written) Nice to have: Ability to work in a hybrid model with regular presence in Varsseveld and Utrecht Willingness to travel occasionally within Europe About the project: You will join an international Data Office responsible for delivering data-driven innovation across the organization. The department consists of approximately 20 professionals and is rapidly scaling its engineering capabilities. The role focuses on leading a team of ~15 engineers operating within an Agile SCRUM environment. You will work closely with the Head of Data, Product Owners, Architects, and Tech Leads to ensure team performance, capability growth, and alignment with strategic objectives. The mission of this position is to build a high-performing engineering organization, establish a strong Center of Excellence, and cultivate a collaborative culture that supports continuous improvement, knowledge sharing, and professional development. The environment leverages modern cloud-based data platforms (AWS/GCP) and emphasizes Agile leadership, coaching, and empowerment. Responsibilities: Lead and manage a team of approximately 15 engineers within the Data Office Build and maintain a high-performing team culture aligned with organizational values Establish and drive team rituals supporting collaboration, delivery excellence, and continuous improvement Coach and mentor engineers using a formal development framework, supporting career growth and performance development Act as a people leader responsible for engagement, wellbeing, and performance management Drive the creation of a Data Engineering Center of Excellence, promoting best practices and knowledge sharing Own the full talent lifecycle: hiring, onboarding, development, and retention Support Agile ways of working, guiding teams in SCRUM delivery and removing impediments Collaborate with Product Owners, Architects, and Tech Leads to ensure strategic alignment Act as a trusted advisor to leadership regarding people strategy and organizational growth Ensure clarity of objectives, foster trust, and empower teams to deliver high-quality solutions Promote innovation and continuous improvement within a cloud-based engineering environment Support cross-functional collaboration across international teams Contribute to workforce planning and capability development
PL/SQL Developer with GCP (f/m/x)
Sii Sp. z o.o.
⚲ Białystok, Centrum, Bydgoszcz, Gdańsk, Oliwa, Katowice, Kraków, Podgórze, Lublin, Łódź, Śródmieście, Piła, Poznań, Wilda, Rzeszów, Szczecin, Toruń, Warszawa, Mokotów, Wrocław, Fabryczna
Requirements
- SQL
- BigQuery
- Python
- Google Cloud Platform
- Oracle Applications
- Git
- SAS
- Google Cloud Dataform
- Vertex AI
Our requirements: At least 3 years of professional experience in mass data processing Strong proficiency in SQL, with hands‑on experience in BigQuery and/or Oracle Knowledge of Dataform on GCP or practical experience with similar transformation tools or a strong willingness to learn Dataform Experience with Google Cloud Platform (GCP) and Git Python skills applied in data analytics or data engineering contexts Fluent English (both spoken and written) Fluent Polish required Residing in Poland required About the project: We are looking for a PL/SQL Developer to join our team on a project for a globally recognized leader in the digital payments industry. In this role, you will become part of a strategic Data Transformation team within Analytics, focused on migrating data and analytics processes to the Google Cloud Platform (GCP). This role offers a unique opportunity to combine hands-on data engineering with collaboration across technical and business teams, while co-creating high engineering and data standards for the entire department and driving cloud modernization initiatives. Responsibilities: Migrate existing analytics and data processes from Oracle, Python, and SAS on-prem environments to Google BigQuery, leveraging Dataform Design, develop, and maintain data pipelines and analytical solutions using BigQuery, SQL, Python, Dataform, and Vertex AI Contribute to the modernization of the Analytics Data Platform, helping to build a new, cloud-native environment on GCP using modern architectural concepts and technologies Participate in department-wide standards development, quality assurance best practices, and architectural discussions; proactively propose improvements and optimizations Provide technical support to team members, collaborating closely with analytics, data engineering, and business teams to translate business needs into scalable technical solutions We offer: Great Place to Work since 2015 - it’s thanks to feedback from our workers that we get this special title and constantly implement new ideas Employment stability - revenue of PLN 2.1BN, no debts, since 2006 on the market We share the profit with Workers - over PLN 76M has already been allocated for this aim since 2022 Attractive benefits package - private healthcare, benefits cafeteria platform, car discounts and more Comfortable workplace – class A offices or remote work Dozens of fascinating projects for prestigious brands from all over the world – you can change them thanks to the Job Changer application PLN 1 000 000 per year for your ideas - with this amount, we support the passions and voluntary actions of our workers Investment in your growth – meetups, webinars, training platform and technology blog – you choose Fantastic atmosphere created by all Sii Power People
Oracle Developer with German
Link Group
⚲ Remote
21 840 - 28 560 PLN (B2B)
Requirements
- Oracle
- SQL
- Oracle database
- PL/SQL
- AWS
- Azure
- OCI
- Python
- dbt
- Starburst
- Trino
- C4 Model
- ADRs
- CI/CD
- GitLab CI
- GitHub Actions
- Jenkins
- Communication skills
- Problem-Solving
About the project: We are looking for a highly skilled Senior Oracle Developer with a strong background in data engineering and a command of the German language. In this role, you will bridge the gap between traditional relational database excellence and modern cloud data architectures. You will be responsible for designing, optimizing, and maintaining complex Oracle environments while building scalable data pipelines using Python and dbt within a Lakehouse ecosystem. Requirements: Technical Requirements - Oracle & SQL: Expert-level proficiency in Oracle databases, PL/SQL programming, and deep knowledge of database internals (optimization, indexing, partitioning). - Cloud Data: Strong experience in designing and scaling relational databases in cloud environments (AWS/Azure/OCI). - Python: Solid hands-on experience in Python 3.x, specifically for data engineering and automation tasks. - Data Engineering Tools: Proven track record with dbt and experience with distributed query engines like Starburst/Trino. - Architecture: Ability to visualize complex systems (C4 Model) and a structured approach to technical documentation (ADRs). - DevOps: Practical experience with CI/CD tools (e.g., GitLab CI, GitHub Actions, or Jenkins) and a "Shift-Left" approach to quality. Language & Soft Skills - German: Minimum B2/C1 level – ability to participate in technical discussions, document solutions, and collaborate with German-speaking stakeholders. - English: Communicative level for internal global collaboration. - Analytical Mindset: Exceptional problem-solving skills with a focus on system performance and reliability. Day-to-day tasks: - Database Engineering: Lead the design, modeling, and performance tuning of cloud-based relational databases. - Oracle Mastery: Develop and optimize advanced SQL and PL/SQL code, including partitioning strategies, backup/recovery procedures, and high-availability setups. - Data Pipelines: Build and maintain robust data integration flows and system integrations using Python (3.x). - Modern Data Stack: Leverage dbt (Core/Cloud) and Starburst/Trino to transform and orchestrate data within Lakehouse environments. - Architecture & Governance: Define technical architecture using the C4 model, document decision-making through ADRs, and maintain high-quality documentation in Confluence. - DevOps Integration: Own the end-to-end lifecycle of your code by implementing and managing CI/CD pipelines for automated testing, quality assurance, and seamless deployment.
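To ground the Oracle side of the role, a short sketch using the python-oracledb driver: an interval-partitioned table plus a trivial anonymous PL/SQL block run from Python. Connection details and object names are placeholders:

import oracledb  # pip install oracledb

conn = oracledb.connect(user="app", password="...", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Monthly interval partitioning keeps partition pruning effective
# for time-bounded queries on large fact tables.
cur.execute("""
CREATE TABLE sales (
    sale_id   NUMBER,
    sale_date DATE,
    amount    NUMBER(12, 2)
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p0 VALUES LESS THAN (DATE '2024-01-01'))
""")

# An anonymous PL/SQL block executed from Python, as the role mixes both worlds.
cur.execute("""
BEGIN
    INSERT INTO sales (sale_id, sale_date, amount)
    VALUES (1, SYSDATE, 99.90);
    COMMIT;
END;
""")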
Middle Data Engineer
N-iX
⚲ Gdynia, Krakow, Wrocław
16 561 - 20 241 PLN (B2B)
Requirements
- Python
- Azure
- Snowflake
- dbt
- ADF
- SQL
- Docker (nice to have)
- Spark (nice to have)
About the project: #4991 Our customer is one of the world's largest food and beverage companies, operating in 189 countries with over 2000 brands. The company's diverse portfolio includes products such as bottled water, dairy products, breakfast cereals, coffee, confectionery, frozen food, pet foods, snacks, and baby food. We are looking for a Middle Data Engineer to design, build, and maintain scalable data systems that support analytics, reporting, and machine learning use cases. This role requires strong ownership, solid SQL skills, and experience working with large datasets in production environments. Requirements: - 2–4 years of experience in Data Engineering or a related role. - Main technical skills are: Snowflake, dbt, ADF. - Solid SQL skills (joins, aggregations, window functions). - Understanding of data governance and data quality concepts. - Proficiency in Python. - Experience with ETL/ELT tools. - Hands-on experience with the Azure cloud platform. - Azure Ops experience: provisioning and configuring. - Experience working with data warehouses (e.g., Snowflake, BigQuery, Redshift). - Understanding of data modeling concepts (e.g., star schema). Nice to have - Basic knowledge of Docker or containerization. - Experience with data processing frameworks (e.g., Spark). Soft skills - Strong analytical thinking and problem-solving skills. - Ability to work independently with some guidance. - Good communication skills with technical and non-technical stakeholders. - Willingness to learn and grow in a collaborative environment. Day-to-day tasks: - Build and maintain data pipelines (batch and near real-time). - Develop and optimize data models for analytics and BI tools. - Write efficient SQL queries for large-scale datasets. - Support development and maintenance of ETL/ELT processes. - Ensure data quality, integrity, and consistency across systems. - Collaborate with data analysts, data scientists, and product teams to understand data requirements. - Monitor and troubleshoot data pipelines and workflows. - Contribute to improving the performance of the existing data infrastructure. - Work with cloud-based data platforms and modern data stack tools.
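As an illustration of the "window functions" requirement above, one everyday pattern: keeping each customer's latest order with ROW_NUMBER(). The table and columns are hypothetical:

# Illustrative SQL kept as a Python constant; it would run on Snowflake,
# BigQuery, or any engine with standard window functions.
LATEST_ORDER_PER_CUSTOMER = """
SELECT *
FROM (
    SELECT o.*,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id
               ORDER BY order_ts DESC
           ) AS rn
    FROM orders o
) t
WHERE rn = 1
"""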
Mid Azure Data Growth Engineer
BRAVER IT SPÓŁKA AKCYJNA
⚲ Kraków
140–180 PLN net (+ VAT) / hour
Requirements
- PySpark
- Python
- SQL
- Microsoft Azure
- Microsoft Power BI
- Git
Our requirements: 1–4 years of experience in Data Engineering or a similar role Hands-on experience with PySpark/Python and SQL Basic knowledge of Azure (Databricks / Data Factory / Storage) Understanding of ETL/ELT concepts Willingness to grow in cloud data technologies English level: B2+ Nice to have: Experience with Databricks Power BI Exposure to CI/CD or Git Responsibilities: Support development and maintenance of data pipelines (ETL/ELT) Work with Azure Databricks (PySpark, SQL) Process and transform data from multiple sources Assist in building data ingestion workflows Collaborate with senior engineers and cross-functional teams Contribute to data quality and optimization initiatives We offer: Growth opportunity in a large-scale data project Mentoring and collaboration with experienced engineers Modern tech stack (Azure + Databricks) Flexible work model (remote/hybrid) Fast recruitment process
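To make the Databricks part concrete, a minimal PySpark ingestion sketch of the kind of pipeline described above; the mount point and target table are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is predefined

    raw = spark.read.option("header", "true").csv("/mnt/landing/sales/*.csv")
    clean = (raw.filter(F.col("amount").isNotNull())                 # basic quality gate
                .withColumn("amount", F.col("amount").cast("double"))
                .withColumn("ingest_date", F.current_date()))        # load metadata
    clean.write.format("delta").mode("append").saveAsTable("bronze.sales")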
Lead/Senior Data Scientist (Generative AI) 3287
RITS Professional Services
⚲ Warszawa
150–170 PLN/h net (B2B)
Requirements
- PyTorch / NumPy
- Python
- Azure, GCP
- SQL
- FAISS, Pinecone, Weaviate
- MCP / systems for LLM agents
- Machine Learning & Generative AI
- API Generative AI
- HuggingFace, LangChain, LangGraph
RITS Group is an IT company delivering top-quality software development services to both the Polish and international markets. We are proud to be among the fastest-growing technology companies. Our priority is the quality and innovation of the solutions we deliver, and our organizational culture is built on trust, collaboration, and continuous improvement. We are looking for a Lead/Senior Data Scientist (Generative AI)! Join the team and work on modern solutions based on Generative AI and LLMs. The team focuses on innovative applications such as RAG, Talk-to-Data systems, multi-agent workflows, model fine-tuning, and advanced natural language processing (NLP). It is an environment for people who want to grow by working with the latest AI technologies and co-create end-to-end solutions for clients. We offer • Form of cooperation: B2B • Rate: 150–170 PLN/h net + VAT • 100% remote Responsibilities • Building end-to-end GenAI applications (chatbots, voicebots, Talk-to-Data systems) • Creating data pipelines: ingestion, the retrieval layer, and orchestration (e.g., LangChain, LlamaIndex, LangGraph) • Designing and deploying RAG solutions (vector DBs, hybrid search, rerankers, evaluation) • Model selection, prompt engineering, and fine-tuning (LoRA, QLoRA, SFT) • Integrating LLMs with tools, APIs, and data sources (Model Context Protocol – MCP) • Translating business needs into technical solutions • Supporting project delivery and pre-sales activities Requirements • Knowledge of deep learning theory • Experience in Machine Learning, particularly in Generative AI (LLM/LMM) • Hands-on practice in NLP or multimodal models • Experience working with business requirements, from analysis through technical solution design, data preparation, and model building to validation and deployment • Ability to work with data: processing, feature engineering, model evaluation, deployment • Very good knowledge of Python (OOP) and SQL • Experience with vector databases • Experience working in a cloud environment (Azure or GCP) • Knowledge of Deep Learning and GenAI tools and libraries: NumPy, PyTorch, HuggingFace, LangChain, LangGraph • Experience with GenAI APIs (e.g., OpenAI, Gemini) • Practical experience with MCP (servers/clients for LLM agents) Nice to have • Experience with Databricks • Commercial projects in GenAI, NLP, or Computer Vision • Knowledge of microservice architecture • Experience with code repositories and development-support tooling • Business and analytical skills • Creativity in solving client problems • Experience leading or mentoring team members Not the role for you, but you know someone who would be a great fit? Refer them and earn up to 5000 PLN. Referral Program: Talent from your network
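To illustrate the retrieval step of a RAG pipeline mentioned above without tying the example to any one framework's API, here is a hand-rolled sketch; the toy hashed embedder is a stand-in for a real embedding model (e.g., one called via an OpenAI or HuggingFace API):

    import numpy as np

    def embed(texts, dim=256):
        # Toy stand-in for a real embedding model: a hashed bag-of-words.
        vecs = np.zeros((len(texts), dim))
        for i, text in enumerate(texts):
            for token in text.lower().split():
                vecs[i, hash(token) % dim] += 1.0
        return vecs

    def retrieve(query, chunks, chunk_vecs, k=3):
        # Rank chunks by cosine similarity to the query; the top k feed the LLM prompt.
        q = embed([query])[0]
        sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
        return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

    chunks = ["Invoices are stored in the DWH.", "Premiums renew yearly.", "Use SSO to log in."]
    print(retrieve("where are invoices kept", chunks, embed(chunks), k=1))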
Oracle EPM Implementation Consultant (m/f)
IT CONNECT Sp. z o.o. Sp. k.
⚲ Warszawa
13,000–20,000 PLN / month (depending on contract type)
Requirements
- Oracle Cloud EPM
- PBCS
- FCCS
- HFM
- Hyperion Planning
- EPM/CPM
Our requirements: At least 4 years of experience working with Oracle systems (Cloud EPM, PBCS, FCCS, HFM, Hyperion Planning) Experience in multidimensional modeling Experience designing and implementing solutions for external clients Good knowledge of the EPM/CPM domain A university degree in computer science or a related field (e.g., with econometrics components) Knowledge of controlling, management accounting, and budgeting English at a minimum B2 level About the project: Do you have experience implementing multidimensional budgeting systems and want to grow on projects for clients across industries? Join our team and help deliver Enterprise Performance Management (EPM) and Business Intelligence solutions in the Oracle ecosystem. Responsibilities: Independently modeling and implementing multidimensional budgeting systems Participating in the design of solution concepts and IT architecture for clients Implementing data integration solutions Implementing Business Intelligence solutions Working with EPM/CPM-class systems (Enterprise/Corporate Performance Management) Collaborating with the project team and the client at various stages of implementation We offer: Growth opportunities through a certification program and internal and external training Compensation matching your experience and competencies Participation in interesting international projects Real influence on the development of your own competencies and career path A benefits package (including a Lunch Pass card, private medical care, access to a multimedia library) Stable employment at a company specializing in Performance Management and Business Intelligence solutions
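As a toy illustration of the multidimensional modeling this role centers on, a pandas pivot can stand in for one slice of a budgeting cube (all figures and dimension members are invented):

    import pandas as pd

    # Toy fact table: one row per (entity, account, period) combination.
    facts = pd.DataFrame({
        "entity":  ["PL01", "PL01", "DE01", "DE01"],
        "account": ["Revenue", "OPEX", "Revenue", "OPEX"],
        "period":  ["2025-01"] * 4,
        "amount":  [1200.0, 400.0, 900.0, 350.0],
    })

    # One slice of the cube: accounts down the rows, entities across the columns.
    print(pd.pivot_table(facts, values="amount", index="account",
                         columns="entity", aggfunc="sum", margins=True))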
Senior Talend Developer (m/f)
emagine Polska
Requirements
- Documentation
- Microsoft SQL Server
- Atlassian JIRA
- Git
- DataStage (ETL)
- SQL
- MySQL
- PostgreSQL
- Data Integration
- ETL
For one of its major-account clients, emagine is looking for a Senior Talend Developer. Start: ASAP Duration: > 12 months Remote work: hybrid Regular regional travel should be expected Summary The Senior Talend Developer is responsible for designing, developing, and deploying data integration solutions, contributing to the effectiveness of ETL/ELT projects within the organization. Responsibilities: • Proficiency with Talend Data Integration for ETL/ELT projects. • Managing Talend jobs and standard and advanced components. • Designing and implementing complex integration flows. • Writing technical specifications and documenting developments. • Using Git for version control. • Following project methodologies, including the use of Jira. Must haves: • Proficiency with Talend Data Integration (Open Studio / Studio). • Solid SQL skills. • Experience with relational databases (Oracle, SQL Server, PostgreSQL, MySQL). • Ability to design complex integration flows. • Autonomy in flow design and development. Nice to haves: • General knowledge of Semarchy xDM. • Good interpersonal skills and a service-oriented mindset. Additional details: • Environment: deployment across DEV / QUAL / PROD. • Tools: Jira, Git.
Data Engineer
Santander Bank Polska
⚲ Poznań, Warszawa, Wrocław
Requirements
- SQL
- Python
- Git
- GitLab
Our requirements: you have at least 3 years of experience in data engineering, you can build and monitor pipelines (Airflow and SSIS), you can capture data from various sources and structure it, shaping it into a reliable automated solution, ETL topics excite you and you want to grow in them. You enjoy tasks where you co-create solutions used by others, you navigate databases with ease (combining data from multiple sources) and know SQL at least at an intermediate level (efficient joins, indexes, window functions), you know Python and want to keep learning it, you enjoy teamwork and brainstorming, you like code review and care about top code quality, you are curious about data and the wider context of its use, you are accurate, independent, responsible, and see tasks through to completion on time, you speak up freely: you have the courage to voice and defend your opinion, you care about effective communication with other units while building good long-term relationships, you have a university degree related to mathematics, data analysis, statistics, econometrics, or economics. Nice to have: certificates and courses confirming engineering competencies, experience working in the cloud, particularly on the Databricks platform, experience with Apache Spark, practical knowledge of partitioning and Liquid Clustering in Databricks, knowledge of Git/GitLab and basic CI/CD principles Responsibilities: processing data with care: cleaning, enriching, and integrating it; creating and maintaining the feed processes that build data layers, using Airflow and SSIS and, in the future, the Databricks platform; verifying data: critical assessment and validation; simplifying and automating analytical processes, and also building them from scratch, often turning the tedious manual work of many people into a reliable automated process; optimizing solutions so that data loads run as fast as possible and consume as few resources as possible; documenting data combination and processing workflows; drawing conclusions from data and proposing changes to how it is processed; active collaboration within the team and taking the initiative We offer: an employment contract, hybrid work in the locations listed at the top of this offer, private Medicover medical care (with an optional dental package), a cafeteria system with points to exchange for activities, e.g., leisure, culture, dining, or co-funding of a nursery/kindergarten/children's club, a Multisport card, the bank's product offer and loans on employee terms, wellbeing activities, e.g., interest clubs, webinars, exercise sessions, podcasts; office initiatives such as a mammography bus, preventive eye exams, self-defense workshops, a layette for children; benefits for people with disabilities: a financial allowance for health purposes, 2 additional days of leave for employees with a mild degree of disability, fully remote work if the nature of the job allows it, a platform with external and internal training, e.g., programming, data analysis, communication, strategic thinking, negotiation, and more, an employee referral program with a cash reward for referring someone to work at the bank.
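To picture the Airflow side of the role, a minimal DAG sketch (Airflow 2.x API; the DAG id, schedule, and callable are hypothetical):

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def build_data_layer():
        ...  # extract from source systems, clean, and load the warehouse layer here

    with DAG(dag_id="daily_data_layer", start_date=datetime(2025, 1, 1),
             schedule="0 4 * * *", catchup=False) as dag:  # runs daily at 04:00
        PythonOperator(task_id="build_data_layer", python_callable=build_data_layer)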
Data Quality and Lineage Lead
Andersen
⚲ Ljubljana, Belgrade, Zagreb, Budapest, Bratislava
Requirements
- AWS
- SQL
- Python
Andersen is hiring a Data Quality and Lineage Lead for a project building a governed, scalable data platform focused on improving data quality, lineage tracking, and transparency across banking systems. The customer is a financial services organization providing banking products and digital solutions for individual and business clients. The company operates through an established service network and online platforms and is part of a larger international financial group, supporting ongoing development of its services for a broad client base. The project is focused on transforming the bank's fragmented data landscape into a governed and scalable data foundation with near real-time capabilities. The initiative introduces data governance by design, federated data ownership, and business-oriented data products to improve analytics, operational efficiency, and regulatory transparency while enabling data-driven decision-making across the organization. Responsibilities: - Defining data quality dimensions (Accuracy, Completeness, Timeliness, Validity), setting thresholds, and implementing automated scorecards and monitoring dashboards. - Designing and documenting Tier 0 / Tier 1 lineage approaches to ensure clear traceability of critical data assets from source to consumption. - Leading data quality issue triage, supporting remediation prioritization, and governing the quality improvement process across teams. - Partnering with data engineering and BI teams to embed quality checks into ETL/ELT pipelines and ensure governance standards are met. - Implementing and managing data quality and lineage tools (Collibra, Great Expectations, dbt, Alation, Atlan) to automate monitoring and ensure compliance. - Creating and maintaining technical documentation for data quality rules, lineage mappings, and governance workflows. - Ensuring data processes meet regulatory and audit requirements by establishing clear traceability and data validation protocols. Requirements: - 7+ years of experience in data quality, data governance, data analysis, or a related field, with a proven track record of implementing enterprise-wide data quality and data lineage solutions. - Deep experience with governance platforms (Collibra + Ataccama) and data quality frameworks (Great Expectations, dbt). - Strong understanding of the modern data stack, including cloud platforms (AWS, Azure, GCP), data warehouses (Databricks), and data lake formats (Delta Lake). - Hands-on experience with SQL, Python (Pandas), and big data technologies (Apache Spark) for data profiling, validation, and analysis. - Proficiency with BI tools (Tableau, Power BI, Looker) to build quality dashboards and visualize data health metrics. - Solid understanding of data modeling concepts and dimensional design for BI and data warehousing. - Proven ability to define data quality dimensions, set thresholds, and design lineage approaches that ensure regulatory-grade traceability and auditability. - Exceptional stakeholder management skills: the ability to facilitate discussions between technical and business audiences, present findings, and drive governance alignment. - English level Upper-Intermediate or above. Nice to Have: - Azure Certified Data Analytics – Specialty or Azure Certified Business Intelligence. Reasons why this job would be interesting to you: - Experience in teamwork with leaders in FinTech, Healthcare, Retail, Telecom, and others. Andersen cooperates with such businesses as Samsung, Siemens, Johnson & Johnson, BNP Paribas, Ryanair, Mercedes, TUI, Verivox, Allianz, T-Systems, etc. - The opportunity to change the project and/or develop expertise in an interesting business domain. - Job conditions – you can work fully remotely, from the office, or in a hybrid arrangement. - A guarantee of professional, financial, and career growth! The company has introduced mentoring and onboarding systems for each new employee. - The opportunity to earn up to an additional 1,000 USD per month, depending on the level of expertise, which will be included in the annual bonus, by participating in the company's activities. - Access to the corporate training portal, where the company's entire knowledge base is collected and constantly updated. - A vibrant corporate life (parties / pizza days / PlayStation / fruit / coffee / snacks / movies). - Certification compensation (AWS, PMP, etc.). - A referral program. - English courses. - Private health insurance and compensation for sports activities. Your personal data is protected in accordance with GDPR regulations. Learn more: https://andersenlab.com/privacy-policy Join us! https://people.andersenlab.com/
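For a concrete feel of the scorecard idea described above, a minimal pandas sketch that scores two quality dimensions against thresholds; the threshold values and sample data are invented:

    import pandas as pd

    THRESHOLDS = {"completeness": 0.98, "validity": 0.95}  # hypothetical policy values

    def scorecard(df, key, valid_mask):
        scores = {
            "completeness": 1.0 - df[key].isna().mean(),  # share of non-null keys
            "validity": valid_mask.mean(),                # share of rows passing the rule
        }
        return {dim: (round(s, 3), s >= THRESHOLDS[dim]) for dim, s in scores.items()}

    df = pd.DataFrame({"client_id": ["C001", None, "C003"], "amount": [10.0, 5.0, -1.0]})
    print(scorecard(df, key="client_id", valid_mask=df["amount"] > 0))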
Data Platform Architect (with GCP) - Automation & Innovation Department
T-Mobile
⚲ Warszawa, Mokotów
Requirements
- Google Cloud Platform
- BigQuery
- Kubernetes
- Data Warehouse
- Spark
Our requirements: 5+ years of experience in data platform architecture or engineering. Deep knowledge of GCP (BigQuery, Dataflow, Dataproc, Pub/Sub), Spark, Kubernetes, DWH, Big Data. Proven leadership in hybrid cloud/on-premise integration projects. Strong skills in technical documentation, system design, and stakeholder management. Experience with cloud-native migrations and hybrid architectures. Familiarity with data governance and advanced analytics frameworks. Responsibilities: Architecture & Standards Define architectural standards and blueprints for hybrid cloud and on-premise data platforms. Set and maintain best practices for scalability, reliability, and maintainability. Evaluate and recommend technologies and methodologies aligned with business and compliance needs. Design & Integration Lead the design and integration of Spark-based analytics, Kubernetes-orchestrated workloads, and GCP services. Document and communicate data architecture, integration patterns, and workflows for technical audiences. Collaboration & Platform Evolution Collaborate with engineering and business teams on data analytics platform initiatives. Guide teams through migration, modernization, and optimization projects. Support teams in troubleshooting, performance tuning, and strategic planning. We offer: Cooperation based on a direct B2B contract Medical and sports packages and life insurance on preferential terms Access to the RoDoS training platform No dress code – you can be yourself here The "Znam Talent" program – training or a cash reward for referring friends to work 😊! On top of that, you can count on access to our products and services on preferential terms
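As a small taste of the BigQuery work implied above, a sketch using the google-cloud-bigquery client (assumes application-default credentials; the project, dataset, and table are hypothetical):

    from google.cloud import bigquery

    client = bigquery.Client()
    sql = """
        SELECT DATE(event_ts) AS day, COUNT(*) AS events
        FROM `my-project.telemetry.events`
        GROUP BY day
        ORDER BY day DESC
        LIMIT 7
    """
    for row in client.query(sql).result():  # blocks until the query job finishes
        print(row.day, row.events)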
Data Migration Engineer – insurance industry experience
Upvanta
⚲ Remote
25 200 - 29 400 PLN (B2B)
Requirements
- Python
- Apache Airflow
- EIS (nice to have)
About the project: We are looking for a Data Migration Engineer to join the Data Advanced Engineering team. The person in this role will be responsible for designing, planning, and executing the migration of insurance data from the existing platform to a new system. We offer: - project start: ASAP - remote work - participation in an interesting, strategic project in the insurance industry Requirements: - practical knowledge of Python - experience with Apache Airflow - mandatory knowledge of the insurance industry, in particular experience in data migration projects Nice to have: - knowledge of EIS Daily tasks: - designing and planning the data migration process - implementing and executing data migrations between systems - analyzing data and mapping structures between platforms - collaborating with technical and business teams to ensure data correctness - safeguarding the quality and consistency of migrated data
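To sketch the mapping-and-reconciliation flavor of this work, a minimal pandas example; the column mapping and field names are hypothetical:

    import pandas as pd

    # Hypothetical mapping between the legacy platform and the target system.
    MAPPING = {"POLICY_NO": "policy_id", "HOLDER_NAME": "policyholder", "PREM_AMT": "annual_premium"}

    def migrate(source: pd.DataFrame) -> pd.DataFrame:
        target = source.rename(columns=MAPPING)[list(MAPPING.values())]
        target["annual_premium"] = pd.to_numeric(target["annual_premium"], errors="raise")
        return target

    def reconcile(source: pd.DataFrame, target: pd.DataFrame) -> None:
        # Row counts and premium totals must match before migration sign-off.
        assert len(source) == len(target), "row count mismatch"
        src_total = pd.to_numeric(source["PREM_AMT"]).sum()
        assert abs(src_total - target["annual_premium"].sum()) < 0.01, "premium total mismatch"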
Data Engineering Team Lead
Remodevs
⚲ Warszawa
8,000–9,000 USD gross (employment contract)
Requirements
- Cloud
- ETL
- SQL
Overview We are a leader in AI-powered business operations, providing digital solutions and consulting services that help companies improve and grow. We focus on using advanced technology to make operations more efficient and find new ways to create value, especially in private capital markets. Our system includes three main parts: • PaaS (Platform as a Service): An AI-based platform to improve workflows, generate insights, and speed up value creation. • SaaS (Software as a Service): A cloud platform that offers high performance, intelligence, and scalable solutions. • S&C (Solutions and Consulting): Modular tools and guides to help companies manage, grow, and optimize performance. With over ten years of experience working with fast-growing companies and private equity-backed platforms, we have strong expertise and a track record of turning technology into a strategic advantage. The Opportunity We are looking for a Senior Data Architect with strong skills in cloud data architecture, data modeling, SQL, and data governance for large-scale systems. Responsibilities • Create and maintain enterprise data strategies, standards, and designs to support operations, analytics, and AI/ML workloads. • Build cloud-based data solutions on platforms like AWS (Redshift, RDS, Glue, Lake Formation) or similar, ensuring they are scalable, secure, and cost-effective. • Define and enforce data modeling standards (dimensional models, denormalized schemas, OLTP/OLAP designs). • Manage data transformation using DBT, ensuring models are modular, tested, and documented. • Lead design of data integration using Prefect or Airflow, including batch, real-time, event-driven, and API-based processes. • Implement data quality, validation, and testing to ensure accurate and consistent data. • Set up data governance for quality, lineage, cataloging, classification, and access control. • Work with engineers and analytics teams to turn business needs into scalable data designs. • Recommend data tools, technologies, and platforms, and guide technical decisions. • Optimize data storage, indexing, and performance for large datasets. • Document data contracts, schemas, and interfaces across teams. • Ensure data architectures support AI/ML, including feature stores and model datasets. • Review architecture and code to meet standards and maintainability. • Cleanse and validate data, handling errors carefully. • Mentor data engineers on best practices. • Support automated deployment and CI/CD for data infrastructure. Requirements • Experience managing and mentoring a team. • 7+ years in data architecture, data engineering, or related roles. • 5+ years building cloud-based data architectures (AWS, GCP, or Azure). • 5+ years writing complex SQL queries. • 5+ years creating ETL/ELT pipelines with Airflow, Prefect, or similar tools. • Experience with DBT for transformations, testing, and documentation. • Knowledge of data warehouse design (OLTP, OLAP, star/snowflake schemas). • Experience with data modeling tools and methodologies. • Experience with cloud data warehouses like Redshift, Snowflake, or BigQuery. • Familiarity with data governance, quality frameworks, and metadata management. • Understanding of AI/ML data needs, including feature stores and vector retrieval. • Bachelor’s degree in Computer Science or similar (preferred). Pluses (optional skills): Python (Pandas, PySpark), Docker, Kubernetes, CI/CD, AWS Lambdas/Step Functions, Databricks, vector databases, data mesh/fabric, graph databases, cloud certifications. 
Why Join Us? We value people who solve problems creatively, learn quickly, work well in diverse teams, and aim high. We work hard but also make sure to have fun along the way.
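To illustrate the orchestration layer named in the requirements above, a minimal Prefect 2.x flow sketch (the task bodies and flow name are stand-ins):

    from prefect import flow, task

    @task(retries=2)
    def extract() -> list[dict]:
        return [{"id": 1, "value": 10}]  # stand-in for an API or warehouse read

    @task
    def load(rows: list[dict]) -> None:
        print(f"loading {len(rows)} rows")  # stand-in for a warehouse write

    @flow(name="nightly-elt")
    def nightly_elt():
        load(extract())

    if __name__ == "__main__":
        nightly_elt()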
Data Engineer – Central Monitoring Platform (CMP)
emagine Polska
⚲ Lisbon
Requirements
- Documentation
- Microsoft Platform
- GxP (Good Practice guidelines)
- Operations
- Maintenance
- SQL
- Python
- Spark
- PySpark
- Microsoft Azure
We are looking to onboard one additional Data Engineer to support our Central Monitoring Platform (CMP). The primary objective is to increase capacity for business-as-usual operations while contributing to smaller initiatives and improvement activities for the remainder of the year. This is not a lead role. The consultant will join an existing setup with experienced Data Engineers and contribute hands-on in a regulated environment. Assignment Overview The consultant will provide data engineering services supporting CMP delivery and operations, including but not limited to: • Development, maintenance, and operation of data pipelines • Onboarding of new data, studies, and clinical trials • BAU support such as incident and ticket handling, root-cause analysis, and operational follow-ups • Participation in smaller initiatives and scoped projects • Documentation of pipelines and data flows in line with agreed quality standards The assignment will be shaped continuously based on priorities. Must-Have Technical Skills • Strong hands-on experience with PySpark • Proven experience with Azure and Databricks • Strong Python and SQL skills (Spark SQL experience is relevant) • Experience building and operating production-grade data pipelines • Domain and compliance experience • Experience from clinical or regulated environments is strongly preferred, ideally from GxP-relevant contexts or environments with strong documentation, quality controls, and auditability Working Style Hands-on, pragmatic and delivery-oriented profile. Comfortable working with both BAU operational tasks and smaller improvement initiatives. Able to work independently while collaborating closely with the team. Team Context The consultant will join our Tech Team, which acts as an enabler team in our setup. The team currently consists of a Solution Architect, Validation Lead, System Manager and two Data Engineers, distributed across Denmark, Croatia, and India, and made up of a mix of external consultants. Practical Information • Start: As soon as possible • Duration: Until end of year
Data Engineer (GCP)
co.brick
⚲ Gliwice
180–190 PLN/h net (B2B)
Requirements
- Google Cloud Platform
- Data
Data Engineer (GCP) – 100% remote | Project for client Zinnia Location: 100% remote Client: USA Language: English (required) Start: 01.05.2026 (date cannot be moved) Duration: 3 months (with a strong chance of extension) Rate: up to 190 PLN/h (B2B) Required: background check Project description: We are looking for an experienced Data Engineer for our client. The project is long-term in nature, despite the initial 3-month contract. Requirements: • Regular+ / Senior level of experience • Very good knowledge of SQL • Practical experience with GCP (Google Cloud Platform) • Knowledge of: BigQuery • Cloud Functions • Airflow (Kubernetes) • dbt • Airbyte • Terraform • Ability to work with Python (pandas) • Experience with monitoring/incident systems (e.g., PagerDuty) • Experience working in a production environment and with data pipelines We offer: • 100% remote work • Possibility of long-term cooperation • A project for an international client (USA) • Work with a modern data stack (GCP + ETL/ELT tools) Additional information: • The recruitment process may take a bit longer due to the required background check • The project start date is fixed (01.05.2026)
Data Engineer (Azure Databricks)
ITEAMLY SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Kraków
155–190 PLN net (+ VAT) / hour
Requirements
- PySpark
- Databricks
- SQL
- Data Factory
- Azure SQL
- Storage
- Microsoft Power BI
Our requirements: 5+ years of experience in Data Engineering Strong hands-on experience with: PySpark (DataFrames, optimization techniques); Databricks; SQL Experience with Azure services: Data Factory; Azure SQL / Storage Knowledge of cloud environments (preferably Azure) Strong problem-solving skills and ability to work independently Good communication skills in English (B2+) Nice to have: Power BI Azure certifications (AZ-900 or similar) Databricks certification About the project: We are looking for an experienced Data Engineer with strong PySpark skills to join a large-scale Azure data platform project. You will work in a distributed environment, focusing on building and optimizing data pipelines. Responsibilities: Develop and maintain data pipelines using PySpark and SQL Work with Azure Databricks for large-scale data processing Build and maintain ingestion and data transformation workflows Perform data wrangling and integrate multiple data sources Optimize Spark jobs (DataFrames, partitioning, clustering, SparkSQL) Work with data formats such as Delta, Parquet, and CSV Collaborate with stakeholders across multiple time zones Create and maintain technical documentation We offer: 100% remote work International project environment Modern data stack (Azure + Databricks) Fast recruitment process Long-term cooperation
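One concrete example of the Spark optimization work listed above: broadcasting a small dimension table to avoid shuffling the large fact table (paths and column names are hypothetical):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    facts = spark.read.format("delta").load("/mnt/silver/transactions")
    dims = spark.read.format("delta").load("/mnt/silver/merchants")

    # Broadcasting the small side keeps the join map-side and skips a full shuffle.
    joined = facts.join(F.broadcast(dims), on="merchant_id", how="left")
    joined.write.format("delta").mode("overwrite").save("/mnt/gold/enriched_transactions")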
Data Engineer (Azure / Databricks)
ITEAMLY SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Warszawa
145–190 PLN net (+ VAT) / hour
Requirements
- Databricks
- PySpark
- SQL
- ADF
- Storage
- SQL DB
- Lakehouse
- Medallion
- Git
- Azure DevOps
- Microsoft Power BI
Nasze wymagania: 2–6 years of experience in Data Engineering Strong experience with: Databricks ecosystem; PySpark & SQL Experience with: Azure Data Services (ADF, Storage, SQL DB); Data modeling and ETL/ELT design Understanding of Lakehouse / Medallion architecture Experience with CI/CD (Git, Azure DevOps) Strong communication and stakeholder management skills English level: B2+ Mile widziane: Power BI Unity Catalog Experience in enterprise-scale projects O projekcie: We are looking for a Data Engineer (mid-level) to support the development of a modern data platform in Azure. This role combines hands-on engineering with collaboration in a client-facing environment. Zakres obowiązków: Design and develop scalable data pipelines using Databricks, PySpark, Python, and SQL Build and maintain ETL/ELT workflows (Azure Data Factory) Work with modern data architectures (Data Lake / Lakehouse) Integrate and transform data from multiple sources Collaborate with business and technical stakeholders Participate in CI/CD processes for data workflows Support data modeling and analytics integration (e.g. Power BI) Oferujemy: Work in a hybrid model (Warsaw office) Opportunity to join a large-scale, global data project Modern cloud-based tech stack Stable contract with extension Fast onboarding process