Oferty pracy
Business Intelligence Specialist
Shiji Poland
⚲ Katowice
15 120 - 18 480 PLN (B2B)
Wymagania
- BI
- Power BI
- Tableau
- QuickSight (nice to have)
O projekcie: Shiji is a global technology company dedicated to providing innovative solutions for the hospitality industry, ensuring seamless operations for hoteliers day and night. Built on the Shiji Platform—the only truly global hotel technology platform—Shiji's cloud-based portfolio includes Property Management System, Point-of-Sale, guest engagement, distribution, payments, and data intelligence solutions for over 91,000 hotels worldwide, including the largest chains. The best hotels run on Shiji — day and night. About the Team The Analytics team focuses on Business Intelligence solutions – delivering software for interactive dashboards and data-driven insights. We're currently building a new BI tool: a web application with SSO integration, deployment tooling, and microservices for access management. We handle data integration and ETL processes (Athena, Redshift), data modeling, and dashboard development (with dedicated BI developers). The solution is fully deployed on AWS (SQS, Vault, S3, Lambdas), with backend microservices in .NET and a Windows service for data extraction. Deployment is automated end-to-end. - Amazon QuickSight - Microsoft Power BI - Tableau Wymagania: - at least 2 years of previous working experience as a Business Intelligence Specialist or Data Engineer, - good understanding of Relational Database and Data Warehouse concepts, - advanced working knowledge of SQL, - university degree in Computer Science, Engineering, Finance, or similar relevant fields, - basic knowledge of Agile processes and principles, - outstanding communication and presentation skills, - good organizational and time management skills, - sharp analytical and problem-solving skills, - attention to detail Nice to have: - familiarity with microservices architecture and REST APIs, - previous experience working in a SCRUM team Codzienne zadania: - build up and share business domain knowledge from different Shiji products, - understand and analyze the data from a Business Intelligence point of view, - help design and implement data models that will be used to build insights, - build dashboards that make it possible to discover trends in data, using a state-of-the-art Business Intelligence tool, - build and optimize SQL queries for the respective data, - maintain technical documentation.
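A minimal sketch of the kind of SQL building block the tasks above describe, here run against Amazon Athena (part of the listed AWS stack) with boto3; the bookings table, database and S3 output location are invented placeholders, not details from the posting.

import time
import boto3

# Invented names: replace with a real Athena database, table and results bucket.
DATABASE = "analytics"
OUTPUT_LOCATION = "s3://example-athena-results/"

# An aggregation of the sort a dashboard tile could be built on.
QUERY = """
SELECT hotel_id,
       date_trunc('month', check_in_date) AS month,
       COUNT(*)                           AS bookings,
       SUM(total_amount)                  AS revenue
FROM bookings
GROUP BY 1, 2
ORDER BY 1, 2
"""

def run_athena_query(sql: str) -> str:
    """Submit a query to Athena and return its execution id."""
    client = boto3.client("athena")
    response = client.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )
    return response["QueryExecutionId"]

def wait_for_query(execution_id: str) -> str:
    """Poll Athena until the query reaches a terminal state."""
    client = boto3.client("athena")
    while True:
        status = client.get_query_execution(QueryExecutionId=execution_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)

if __name__ == "__main__":
    qid = run_athena_query(QUERY)
    print(qid, wait_for_query(qid))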
Senior Business Intelligence Engineer
Experis Polska
⚲ Remote
24 360 - 26 040 PLN (B2B)
Wymagania
- Data models
- Power BI
- SQL
- SharePoint
- Security
O projekcie: Senior Business Intelligence Engineer 100% remote B2B: 145-155 PLN/h net As a Senior Business Intelligence Engineer, you are responsible for the design and implementation of solutions that enable our clients and business partners to make efficient decisions with ease. You are able to design, implement and deploy reports, dashboards, data models and access rules for end users. What we offer: - 100% remote - Multisport card - Private healthcare - Life insurance - PowerYou – e-learning platform Opis Firmy Experis is a global leader in the recruitment of specialists and managers in key IT areas. With us you will find competitive job offers and interesting IT projects aimed both at experts with many years of experience and at people who are just starting out in IT. We recruit managers and highly qualified consultants with experience in the IT industry. Experis is part of ManpowerGroup and has been recognized as one of the most ethical companies in the world. Wymagania: - 5+ years of commercial experience in a similar role - Strong experience with Power BI - Experience in using various data sources, e.g. SQL, SharePoint, flat files, web sources. - Strong experience in and understanding of relational and dimensional database design and implementation. - Understanding of the importance of governance and security rules when handling data - Experience in converting business problems/challenges to technical solutions considering security, performance, scalability etc. - Experience in enterprise-class solution implementations - Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution; excellent interpersonal skills - Very good English (verbal and written) - Ideally, French language skills (nice to have) Codzienne zadania: - Solving complex business problems at large scale. - Understanding business needs and fulfilling them with the right solution. - Participation in cross-functional initiatives and collaboration across various domains. - Acting as SME for business intelligence related issues in large projects. - Interacting with data engineers, product/project managers and partners from all over the world.
SAS Expert
Experis Polska
⚲ Remote
23 520 - 26 880 PLN (B2B)
Wymagania
- SAS
- Data analytics
- Performance tuning
- SQL
O projekcie: 100% remote B2B: 140-160 PLN/h net + VAT For our project carried out for a large international bank, we are looking for an experienced SAS Expert to support data analytics and migration initiatives. What we offer: - 100% remote - Multisport card - Private healthcare - Life insurance Opis Firmy Experis is a global leader in the recruitment of specialists and managers in key IT areas. With us you will find competitive job offers and interesting IT projects aimed both at experts with many years of experience and at people who are just starting out in IT. We recruit managers and highly qualified consultants with experience in the IT industry. Experis is part of ManpowerGroup and has been recognized as one of the most ethical companies in the world. Wymagania: - 9–12 years of hands-on experience in SAS development and analytics across multiple domains - Strong expertise in SAS Base and Advanced SAS programming - Proficiency in SAS Enterprise Guide, SAS DI Studio, and SAS E-Miner - Solid experience with SQL for data manipulation and integration with SAS - Proven experience in code migration, optimization, and performance tuning - Strong problem-solving skills with the ability to troubleshoot SAS jobs independently - Experience working with large-scale datasets and analytics projects - Strong attention to detail and commitment to high-quality, compliant code delivery Preferred Qualifications: - Experience in the banking or financial services domain - Familiarity with automation testing frameworks for data validation Codzienne zadania: - Develop, enhance, and maintain SAS programs using SAS Base and Advanced SAS - Perform code migration, optimization, and performance tuning across different environments - Analyze large-scale datasets and support data analytics initiatives - Troubleshoot SAS jobs and resolve code-related issues efficiently - Write, test, and validate SAS programs ensuring quality, accuracy, and compliance - Collaborate with business units and handle these interactions independently during migration activities - Work closely with cross-functional teams to ensure smooth integration of SAS solutions - Support SQL-based data manipulation and integration with SAS workflows
Data Engineer GCP
Experis Polska
⚲ Remote
25 200 - 28 560 PLN (B2B)
Wymagania
- GCP
- BigQuery
- Python
- API
- Data pipelines
- SQL
- Data models
- Google cloud platform
- AWS
- Azure
- Tableau (nice to have)
- Power BI (nice to have)
O projekcie: Data Engineer (GCP) Location: 100% remote Salary: 150-170 PLN net/h (B2B with Experis) Offer: - Multisport card - Private healthcare - Access to an e‑learning platform - Group life insurance Opis Firmy Experis is a global leader in the recruitment of specialists and managers in key IT areas. With us you will find competitive job offers and interesting IT projects aimed both at experts with many years of experience and at people who are just starting out in IT. We recruit managers and highly qualified consultants with experience in the IT industry. Experis is part of ManpowerGroup and has been recognized as one of the most ethical companies in the world. Wymagania: - Experience with Google Cloud Platform (ideally GCP, but AWS/Azure experience is acceptable) - Proficiency in Cloud Storage management - Strong BigQuery skills including dataset/table creation, scheduling, data transfers, and Notebook use - Experience with building and automating data pipelines - Python skills for data upload, transformation, and automation scripting - Experience working with APIs for data extraction and integration - Strong SQL skills (joins, filtering, aggregations, calculations, data modification) - Tableau/Power BI skills are optional, as these can be taught Codzienne zadania: - Working with Cloud Storage to manage and organize data - Creating datasets and tables in BigQuery - Managing data transfers, scheduling, and using BigQuery Notebook (Python/API) - Building and automating data pipelines using cloud services - Scheduling event‑driven tasks - Writing Python scripts for data uploads, transformations, and automation - Using API requests to extract and integrate data from external sources - Writing SQL queries for joining tables, filtering, aggregating, calculating, and modifying data - (Optional) Creating data models from multiple data sources for analytics - (Optional) Building dashboards and visualizations in Tableau or Power BI
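A small sketch of two of the daily tasks listed above, pulling records from an external REST API and appending them to a BigQuery table with the google-cloud-bigquery client; the endpoint, response shape and table id are assumptions made up for illustration.

import requests
from google.cloud import bigquery

# Placeholder endpoint and table id; both are assumptions, not real project resources.
API_URL = "https://api.example.com/v1/orders"
TABLE_ID = "my-project.staging.orders_raw"

def fetch_orders() -> list[dict]:
    """Pull one page of records from the (hypothetical) source API."""
    response = requests.get(API_URL, params={"limit": 1000}, timeout=30)
    response.raise_for_status()
    return response.json()["items"]  # assumed response layout

def load_to_bigquery(rows: list[dict]) -> None:
    """Append the extracted rows to a BigQuery staging table."""
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    client.load_table_from_json(rows, TABLE_ID, job_config=job_config).result()

if __name__ == "__main__":
    load_to_bigquery(fetch_orders())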
Data Engineer
Experis Polska
⚲ Wrocław
21 840 - 21 840 PLN (B2B)
Wymagania
- Data pipelines
- Azure Databricks
- Data Lake
- Databricks
- Security
- Python
- Spark
- Azure Data
- Azure Data Factory
- Qlik
- IBM MQ
- API
- REST API
- JSON
- ETL
- Data modeling
- CI/CD Pipelines
- Git
- Agile (nice to have)
- Big data (nice to have)
- Performance tuning (nice to have)
- Power BI (nice to have)
O projekcie: Data Engineer Location: hybrid, 2-3 days per week in the Wrocław office Salary: 130 PLN/h B2B This role is hands-on, highly collaborative, and ideal for someone who enjoys turning complex data challenges into reliable, production-grade solutions. Offer: - Multisport Card - Life insurance - Private healthcare - PowerYou platform Opis Firmy Experis is a global leader in the recruitment of specialists and managers in key IT areas. With us you will find competitive job offers and interesting IT projects aimed both at experts with many years of experience and at people who are just starting out in IT. We recruit managers and highly qualified consultants with experience in the IT industry. Experis is part of ManpowerGroup and has been recognized as one of the most ethical companies in the world. Wymagania: Required Skills & Qualifications: - Strong expertise in Azure Databricks, Python, and Spark (must-have). - Solid experience with Azure Data Services, such as Azure Data Lake and Azure Data Factory - Experience with ingestion tools such as Qlik Replicate / MQ / MQCC - Experience with API development (REST/JSON or equivalent). - Solid understanding of ETL processes, data modeling, and fundamental data architecture principles. - Familiarity with CI/CD pipelines and version control (Git, etc.). - Strong problem‑solving skills and a structured way of working within an agile delivery model. - Excellent communication and collaboration skills - you enjoy interacting with business stakeholders and cross-functional teams. Nice-to-Have Skills - Experience with big data processing and performance tuning in Spark. - Knowledge of data governance, lineage, and security best practices. - Exposure to Power BI or similar visualization tools. Codzienne zadania: - Design, develop, and optimize data pipelines and workflows using Azure Databricks and Python. - Implement and maintain data solutions across the Azure Data Platform (Data Lake, Data Factory, Databricks, etc.). - Develop and maintain APIs supporting data ingestion, data sharing, and interoperability. - Collaborate closely with business stakeholders to gather requirements and translate them into technical solutions. - Ensure data quality, security, and compliance across all solutions. - Integrate data solutions into existing business processes and cross-functional systems. - Continuously improve performance, cost-efficiency, and scalability of data systems.
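An illustrative PySpark sketch of the ingest, cleanse and persist pattern this role orchestrates on Azure Databricks; it assumes a Databricks-style runtime with Delta Lake available, and the lake paths and column names are invented.

from pyspark.sql import SparkSession, functions as F

# Illustrative Data Lake paths; replace with real storage locations.
RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
CURATED_PATH = "abfss://curated@examplelake.dfs.core.windows.net/orders_daily/"

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Read newline-delimited JSON landed by an ingestion tool (e.g. Qlik Replicate).
raw = spark.read.json(RAW_PATH)

# Light cleansing plus a daily aggregate: the kind of transformation the pipelines above run.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_timestamp"))
       .groupBy("order_date", "country")
       .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

# Persist as a Delta table partitioned by date (Delta is the default table format on Databricks).
curated.write.format("delta").mode("overwrite").partitionBy("order_date").save(CURATED_PATH)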
IT Service Management Specialist (SLA/CMDB/Reporting/Automation) (m/f/x)
Asseco Poland S.A.
⚲ Warszawa, Wilanów
Wymagania
- ServiceNow
- Jira
- Microsoft Power BI
- SQL
Nasze wymagania: At least 4 years of experience in IT service management. Knowledge of the ITIL v4 framework: expertise in Service Level Management (SLM) and Service Asset and Configuration Management (SACM). Experience in managing a CMDB: practical ability to model relationships between assets (CI – Configuration Items) and to maintain data integrity. Ability to define and monitor KPIs/SLAs: experience in operationalizing business indicators into technical parameters and holding suppliers accountable for the quality of delivered services. Experience in process mapping: ability to audit existing IT procedures and translate them into clear diagrams/documentation. Knowledge of ITSM-class tools: hands-on work in systems such as ServiceNow, Jira Service Management. English at a level that allows comfortable speaking and reading. Higher education (Computer Science, Project Management, Finance and Accounting). Mile widziane: Data Visualization (Power BI): ability to create interactive dashboards. SQL basics: independently pulling data directly from the databases of ticketing systems. Automation (Low-code/No-code): knowledge of tools such as Microsoft Power Automate and of JQL queries within Jira. Certificates: ITIL 4 Foundation, PRINCE2. O projekcie: If you have experience in IT service management, you are comfortable with SLAs, service quality reporting and working with a CMDB, and you treat processes and data as real tools for influencing the business – this role is for you. Within Asseco Poland's internal ICT team we are creating a new expert position responsible for organizing and developing the IT service management area in an organization with large-scale infrastructure. We are looking for someone who not only monitors indicators, but can interpret them, enforce contractual provisions and initiate process improvements. It is a role for someone who wants to have a real impact on the quality and measurability of IT services in one of the largest technology organizations in Poland and Europe. Work model: hybrid – 3 days a week in the office at ul. Branickiego 13 in Warsaw. Zakres obowiązków: Procedure inventory: analysis, correction and placement within processes. SLA monitoring and settlement: analysis of availability and response-time parameters in line with the concluded agreements and calculation of penalties. Service performance reporting: preparation of periodic reports on the quality of delivered IT services for stakeholders. CMDB data quality management: cyclical verification, updating and auditing of asset registers and the relationships between them. Automation of control processes: implementation of tools and procedures that ensure an automatic flow of information about infrastructure changes. Cooperation with suppliers and service owners: clarifying discrepancies in reports and supporting the definition of new quality parameters.
Database Administrator (Azure SQL Database)
Scalo
⚲ Gdańsk
16 800 - 20 160 PLN (B2B)
Wymagania
- MSSQL
- PostgreSQL
- Azure SQL Database
- Azure Data
- ARM
- SQL
- PowerShell
- Bash
- Perl
- NoSQL
- MongoDB
- ITIL
O projekcie: Hi! With us you will find what you are looking for – see for yourself! At Scalo you can expect - Remote work (occasional visits to the Gdańsk office – to be agreed) - Rate: up to 120 PLN net + VAT/h/B2B What do we have for you? - Stable cooperation Technological challenges and work with modern solutions. Internal Mobility - you do not have to change companies to change projects - SCALO Ambassadors Develop technical and pre-sales competencies, influence the direction in which the organization grows and support others - Personal branding Create valuable content, share knowledge and experience, appear as an expert at events (e.g. ProMeet) - grow your brand in a natural way - Benefit box Full medical care, a MultiSport card and the wide Motivizer offer - choose benefits that take care of your health, energy and everyday comfort - Referral program Refer a friend from IT and we will take care of the rest – they get a new project, you get a bonus. A win-win in its purest form - Company integrations and events Shared outings, company events and opportunities to get to know each other outside of projects – because good relationships make the best teams Sounds interesting? Apply – we are waiting for your CV! Wymagania: This offer is for you if: - You have at least 5 years of experience in enterprise-level database administration - You know MSSQL and PostgreSQL and can manage users and permissions - You have hands-on experience with Microsoft Azure: Azure SQL Database and Azure Database for PostgreSQL - You can work with ARM templates and know good practices for monitoring, security and cost optimization in the cloud - You resolve incidents and perform root cause analysis - You optimize database performance and write efficient SQL queries - You create scripts and automate processes in PowerShell, Bash, Perl or SQL - You have basic knowledge of NoSQL databases, e.g. MongoDB - You know ITIL processes (Incident, Problem, Change Management) Codzienne zadania: - Managing users and access in MSSQL and PostgreSQL databases - Resolving database-related incidents and problems - Monitoring database performance and implementing optimizations, including using Zabbix for alerting and reporting - Planning and performing backup and recovery operations - Creating scripts and automating repetitive processes (PowerShell, Bash, Perl, SQL) - Cooperating with DevOps, infrastructure and application teams on database maintenance - Creating technical documentation and reports on configuration, incidents and optimizations - Maintaining high availability and performance of databases in on-prem and Azure environments - Performing root cause analysis and implementing corrective actions
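A sketch of the sort of routine monitoring automation the tasks above mention, checking the most recent backup per database on a SQL Server instance through pyodbc; the connection string is a placeholder and the check is deliberately minimal (msdb backup history is not available on Azure SQL Database itself).

import pyodbc

# Placeholder connection string; supply a real server and credentials in practice.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-sql.internal;DATABASE=master;"
    "UID=dba_user;PWD=***;Encrypt=yes"
)

# msdb.dbo.backupset keeps SQL Server backup history.
SQL = """
SELECT d.name,
       MAX(b.backup_finish_date) AS last_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b ON b.database_name = d.name
GROUP BY d.name
ORDER BY last_backup
"""

def report_backup_age() -> None:
    """Print each database together with the timestamp of its most recent backup."""
    with pyodbc.connect(CONN_STR) as conn:
        for name, last_backup in conn.cursor().execute(SQL):
            print(f"{name}: last backup {last_backup or 'never'}")

if __name__ == "__main__":
    report_backup_age()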
Data Architect / Data Modeller
Antal
⚲ Kraków
30 240 - 33 600 PLN (B2B)
Wymagania
- Data management
- Data modelling
- Analytical skills
- Visual Paradigm
- IBM
- Hadoop (nice to have)
- Cloudera (nice to have)
- GCP (nice to have)
- Azure (nice to have)
- Cloud (nice to have)
- BigQuery (nice to have)
- Kafka (nice to have)
- Flink (nice to have)
- PUB (nice to have)
- MDM (nice to have)
- Enterprise Architect (nice to have)
- TOGAF (nice to have)
O projekcie: Data Architect / Data Modeller Hybrid model – 6 days per month onsite (Kraków) Rate: 180–200 PLN/h B2B We are looking for an experienced Data Architect / Data Modeller to join the CIB Data & Analytics IT organisation, responsible for defining data management principles and delivering core data services across a large‑scale financial environment. The role combines architectural leadership, modelling expertise, and close collaboration with global stakeholders in a dynamic and demanding setting. What We Offer: - Long‑term B2B cooperation at 180–200 PLN/h. - Hybrid work model with 6 days per month onsite in Kraków. - Private medical care package LuxMed. - Multisport card. - Work in a global, modern, data‑driven environment within CIB Data & Analytics IT. - Opportunity to influence enterprise‑level data architecture and modelling standards. Wymagania: Core Skills & Requirements: - 5+ years of experience as a Data Architect in a large, complex, geographically dispersed organisation. - Strong understanding of data value, data management, and applying technology to business strategy. - Experience in Financial Services, especially in data‑focused domains (Data Architecture, Data Governance, Data Management). - Proven logical data modelling skills and experience delivering conceptual and logical models. - Ability to think abstractly, recognise architectural patterns, and see the bigger picture. - Knowledge of industry modelling standards (BIAN, FSDM, BDW). - Strong analytical skills with the ability to challenge designs and resolve complex issues. - Ability to quickly understand existing data landscapes and apply modelling skills to new contexts. - Experience with modelling tools such as Visual Paradigm, IBM Infosphere Data Architect, or Erwin. - Excellent communication skills, including virtual collaboration across regions. - Strong planning, organisation, and time‑management abilities. - Ability to work independently with high self‑awareness and ownership. Nice to Have: - Experience with Hadoop ecosystems (Cloudera/HortonWorks). - Exposure to GCP, Azure, or equivalent cloud data platforms (BigQuery, Dataflow, Dataproc). - Experience with streaming technologies (Kafka, Flink, Pub/Sub). - Understanding of integration patterns and MDM principles. - Familiarity with enterprise architecture frameworks such as TOGAF. - Experience working in agile delivery environments. Codzienne zadania: - Understand business requirements through close collaboration with the CIB Chief Data Office, business users, and subject matter experts. - Translate business needs into solution designs, functional requirements, and data architecture deliverables. - Communicate functional solutions to development teams and support them throughout implementation. - Work with production support teams to ensure new solutions are well understood and maintainable. - Promote data architecture best practices, standards, and modelling techniques across the organisation. - Contribute to conceptual, logical, and semantic data models supporting trusted sources and reusable data assets. - Participate in design reviews and ensure alignment with the CIB Data Architecture strategy. - Provide thought leadership and influence stakeholders to adopt consistent data standards. - Operate effectively in a distributed, multi‑regional environment requiring strong communication and collaboration.
Senior SQL Engineer with BigQuery
GFT Poland
⚲ Kraków, Warszawa, Wrocław, Poznań, Łódź
14 500 - 22 300 PLN (PERMANENT)
Wymagania
- SQL
- BigQuery
- GCP
- data vault
- Python
- Terraform
- Airflow
- CI/CD
- Data engineering
- ETL
- data fusion (nice to have)
- cdap (nice to have)
- dataproc (nice to have)
- Java (nice to have)
O projekcie: Why join GFT? You will work with and learn from top IT experts. You will join a crew of experienced engineers: 60% of our employees are senior level. Interested in the cloud? You will enjoy our full support in developing your skills: training programs, certifications and our internal community of experts. We have strong partnerships with top cloud providers: Google, Amazon and Microsoft - we are number one in Poland in the number of GCP certificates. We are focused on development and knowledge sharing. Internal expert communities provide a comfortable environment where you can develop your skillset in areas such as blockchain, Big Data, cloud computing or artificial intelligence. You will work in a stable company (32 years on the market) in demanding and challenging projects for the biggest financial institutions in the world. We offer you: - Hybrid work – 2 office days per week - Working in a highly experienced and dedicated team - Benefit package that can be tailored to your personal needs (private medical coverage, sport & recreation package, lunch subsidy, life insurance, etc.) - Online training and certifications matched to your career path - Access to e-learning platform - Mindgram - a holistic mental health and wellbeing platform - Work From Anywhere (WFA) - the temporary option to work remotely outside of Poland for up to 140 days per year (including Italy, Spain, the UK, Germany, Portugal, and Bulgaria) - Social events Wymagania: We are looking for a hands-on Senior SQL Developer with a strong BigQuery background and experience in working with large data platforms. This role is about expert SQL skills, modern DataOps, and building data pipelines in the cloud. You will focus on designing reliable data models and creating efficient ETL/ELT workflows using Data Vault. You will join our agile team to ensure our data solutions are secure, stable, and ready for business use. Location & Model: Hybrid. You can work from one of our offices in Krakow, Warsaw, Lodz, Poznan, or Wroclaw. Candidates based outside of Krakow are required to visit the Krakow office for 2 days once a month.
Your skills - At least 3–4 years of experience in SQL optimization and complex data work in BigQuery - Experience in Data Vault modeling and usage - Good knowledge of Python and Terraform for automation - Hands-on experience with Cloud Composer (Airflow), Cloud Run, and Pub/Sub - Practical skills with Git for version control - Experience with CI/CD tools (Ansible, Jenkins) for cloud applications - Ability to work in a DataOps model and Agile environment - Strong problem-solving and analytical skills - Willingness to learn new things and work well in a team - English level B2+ Nice to have - Experience with GCP Data Fusion or CDAP (ingesting and parsing CSV, JSON, XML data) - Experience with Dataproc and handling data from RESTful/SOAP APIs or SFTP servers - Understanding of Data Contract best practices - Basic Java development skills (e.g., for custom Data Fusion plugins) - Experience with automated testing tools for cloud data solutions Codzienne zadania: - Design, build, and deploy data models and transformations in BigQuery (complex SQL, stored procedures, partitioning, clustering) - Create and manage ETL/ELT pipelines to move and transform data into Data Vault models - Optimize SQL queries and data jobs to make them faster and more cost-effective - Connect data from different sources (APIs, SFTP) and ensure it is accurate - Manage code and CI/CD pipelines using tools like Git, Jenkins, and Ansible - Use Google Secret Manager to keep applications and credentials secure - Monitor data pipelines, fix bugs, and suggest improvements - Talk to business stakeholders to understand their needs and create technical plans
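As an illustration of the BigQuery modelling work named above (partitioning, clustering, Data Vault), a minimal sketch that creates a satellite-style table with the google-cloud-bigquery client; the project, dataset and column names are invented.

from google.cloud import bigquery

# Invented project, dataset and table names; replace with real ones.
DDL = """
CREATE TABLE IF NOT EXISTS `example-project.raw_vault.sat_customer_details`
(
  customer_hk   STRING NOT NULL,   -- hash key of the customer hub
  load_ts       TIMESTAMP NOT NULL,
  record_source STRING,
  email         STRING,
  country       STRING,
  hash_diff     STRING             -- change-detection hash for the satellite
)
PARTITION BY DATE(load_ts)
CLUSTER BY customer_hk
"""

client = bigquery.Client()
client.query(DDL).result()  # run the DDL and wait for completion
print("satellite table created (or already present)")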
Senior SQL Consultant with BigQuery
GFT Poland
⚲ Kraków, Warszawa, Wrocław, Poznań, Łódź
21 000 - 27 384 PLN (B2B)
Wymagania
- SQL
- BigQuery
- GCP
- data vault
- Python
- Terraform
- Airflow
- CI/CD
- Data engineering
- ETL
- data fusion (nice to have)
- cdap (nice to have)
- dataproc (nice to have)
- Java (nice to have)
O projekcie: Why join GFT? You will work with and learn from top IT experts. You will join a crew of experienced engineers: 60% of our employees are senior level. Interested in the cloud? You will enjoy our full support in developing your skills: training programs, certifications and our internal community of experts. We have strong partnerships with top cloud providers: Google, Amazon and Microsoft - we are number one in Poland in numbers of GCP certificates. We are focused on development and knowledge sharing. Internal expert communities provide a comfortable environment where you can develop your skillset in areas such as blockchain, Big Data, cloud computing or artificial intelligence. You will work in a stable company (32 years on the market) in demanding and challenging projects for the biggest financial institutions in the world. Wymagania: We are looking for a hands-on Senior SQL Developer with a strong BigQuery background and experience in working with large data platforms. This role is about expert SQL skills, modern DataOps, and building data pipelines in the cloud. You will focus on designing reliable data models and creating efficient ETL/ELT workflows using Data Vault. You will join our agile team to ensure our data solutions are secure, stable, and ready for business use. Location & Model: Hybrid. You can work from one of our offices in Krakow, Warsaw, Lodz, Poznan, or Wroclaw. Candidates based outside of Krakow are required to visit the Krakow office for 2 days once a month. Your skills - At least 3–4 years of experience in SQL optimization and complex data work in BigQuery - Experience in Data Vault modeling and usage - Good knowledge of Python and Terraform for automation - Hands-on experience with Cloud Composer (Airflow), Cloud Run, and Pub/Sub - Practical skills with Git for version control - Experience with CI/CD tools (Ansible, Jenkins) for cloud applications - Ability to work in a DataOps model and Agile environment - Strong problem-solving and analytical skills - Willingness to learn new things and work well in a team - English level B2+ Nice to have - Experience with GCP Data Fusion or CDAP (ingesting and parsing CSV, JSON, XML data) - Experience with Dataproc and handling data from RESTful/SOAP APIs or SFTP servers - Understanding of Data Contract best practices - Basic Java development skills (e.g., for custom Data Fusion plugins) - Experience with automated testing tools for cloud data solutions Codzienne zadania: - Design, build, and deploy data models and transformations in BigQuery (complex SQL, stored procedures, partitioning, clustering) - Create and manage ETL/ELT pipelines to move and transform data into Data Vault models - Optimize SQL queries and data jobs to make them faster and more cost-effective - Connect data from different sources (APIs, SFTP) and ensure it is accurate - Manage code and CI/CD pipelines using tools like Git, Jenkins, and Ansible - Use Google Secret Manager to keep applications and credentials secure - Monitor data pipelines, fix bugs, and suggest improvements - Talk to business stakeholders to understand their needs and create technical plans
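A minimal Cloud Composer (Airflow 2.4+) DAG sketch of the kind of scheduled BigQuery transformation this role builds; the SQL, table ids and schedule are placeholders, and the Google provider package is assumed to be installed.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Placeholder transformation, e.g. refreshing a reporting table from a Data Vault layer.
SQL = """
CREATE OR REPLACE TABLE `example-project.marts.daily_orders` AS
SELECT order_date, COUNT(*) AS orders
FROM `example-project.raw_vault.link_order`
GROUP BY order_date
"""

with DAG(
    dag_id="daily_orders_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 5 * * *",  # every day at 05:00; the `schedule` argument requires Airflow 2.4+
    catchup=False,
) as dag:
    refresh = BigQueryInsertJobOperator(
        task_id="refresh_daily_orders",
        configuration={"query": {"query": SQL, "useLegacySql": False}},
    )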
Senior Data Engineer (Databricks)
Addepto
⚲ Warsaw, Gdańsk, Wrocław, Białystok, Kraków, Poznań, Katowice
21 000 - 28 560 PLN (B2B)
Wymagania
- Python
- SQL
- ETL
- Azure
- Airflow
- Databricks
- Spark
- Docker
- CI/CD (nice to have)
- Kubernetes (nice to have)
- Kafka (nice to have)
- Power BI (nice to have)
- Dagster (nice to have)
- dbt (nice to have)
O projekcie: Addepto is a leading AI consulting (addepto.com/ai-consulting/) and data engineering (addepto.com/data-engineering-services/) company that builds scalable, ROI-focused AI solutions for some of the world's largest enterprises and pioneering startups, including Rolls Royce, Continental, Porsche, ABB, and WGU. With an exclusive focus on Artificial Intelligence and Big Data, Addepto helps organizations unlock the full potential of their data through systems designed for measurable business impact and long-term growth. The company's work extends beyond client engagements. Drawing from real-world challenges and insights, Addepto has developed its own product - ContextClue - and actively contributes open-source solutions to the AI community. This commitment to transforming practical experience into scalable innovation has earned Addepto recognition by Forbes as one of the top 10 AI consulting companies worldwide. As part of KMS Technology, a US-based global technology group, Addepto combines deep AI specialization with enterprise-scale delivery capabilities—enabling the partnership to move clients from AI experimentation to production impact, securely and at scale. As a Senior Data Engineer, you will have the exciting opportunity to work with a team of technology experts on challenging projects across various industries, leveraging cutting-edge technologies. Here are some of the projects we are seeking talented individuals to join: - Design and development of a universal data platform for global aerospace companies. This Azure and Databricks powered initiative combines diverse enterprise and public data sources. The data platform is at the early stages of the development, covering design of architecture and processes, as well as giving freedom for technology selection. - Data Platform Transformation for energy management association body. This project addressed critical data management challenges, boosting user adoption, performance, and data integrity. The team is implementing a comprehensive data catalog, leveraging Databricks and Apache Spark/PySpark, for simplified data access and governance. Secure integration solutions and enhanced data quality monitoring, utilizing Delta Live Table tests, established trust in the platform. The intermediate result is a user-friendly, secure, and data-driven platform, serving as a basis for further development of ML components. - Design of the data transformation and following data ops pipelines for global car manufacturer. This project aims to build a data processing system for both real-time streaming and batch data. We’ll handle data for business uses like process monitoring, analysis, and reporting, while also exploring LLMs for chatbots and data analysis. Key tasks include data cleaning, normalization, and optimizing the data model for performance and accuracy. Discover our perks and benefits: - Work in a supportive team of passionate enthusiasts of AI & Big Data. - Engage with top-tier global enterprises and cutting-edge startups on international projects. - Enjoy flexible work arrangements, allowing you to work remotely or from modern offices and coworking spaces. - Accelerate your professional growth through career paths, knowledge-sharing initiatives, language classes, and sponsored training or conferences, including a partnership with Databricks, which offers industry-leading training materials and certifications. - Choose your preferred form of cooperation - B2B or a contract of mandate - and enjoy 20 fully paid days off. 
- Participate in team-building events and utilize the integration budget. - Celebrate work anniversaries, birthdays, and milestones. - Access medical and sports packages, eye care, and well-being support services, including psychotherapy and coaching. - Get full work equipment for optimal productivity, including a laptop and other necessary devices. - With our backing, you can boost your personal brand by speaking at conferences, writing for our blog, or participating in meetups. Wymagania: What you’ll need to succeed in this role: - At least 5 years of commercial experience implementing, developing, or maintaining Big Data systems. - Strong programming skills in Python: writing a clean code, OOP design. - Strong SQL skills, including performance tuning, query optimization, and experience with data warehousing solutions. - Experience in designing and implementing data governance and data management processes. - Deep expertise in Big Data technologies, including Apache Airflow, Dagster, Databricks, Spark, DBT, and other modern data orchestration and transformation tools. - Experience implementing and deploying solutions in cloud environments (with a preference for Azure). - Knowledge of how to build and deploy Power BI reports and dashboards for data visualization. - Excellent understanding of dimensional data and data modeling techniques. - Consulting experience and the ability to guide clients through architectural decisions, technology selection, and best practices. - Ability to work independently and take ownership of project deliverables. - Master’s or Ph.D. in Computer Science, Data Science, Mathematics, Physics, or a related field. Codzienne zadania: - Design and optimize scalable data processing pipelines for both streaming and batch workloads using Big Data technologies such as Databricks, Apache Airflow, and Dagster. - Architect and implement end-to-end data platforms, ensuring high availability, performance, and reliability. - Lead the development of CI/CD and MLOps processes to automate deployments, monitoring, and model lifecycle management. - Develop and maintain applications for aggregating, processing, and analyzing data from diverse sources, ensuring efficiency and scalability. - Collaborate with Data Science teams on Machine Learning projects, including text/image analysis, feature engineering, and predictive model deployment. - Design and manage complex data transformations using Databricks, DBT, and Apache Airflow, ensuring data integrity and consistency. - Translate business requirements into scalable and efficient technical solutions while ensuring optimal performance and data quality. - Ensure data security, compliance, and governance best practices are followed across all data pipelines.
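An illustrative Structured Streaming sketch of the streaming-to-Delta pattern described in the projects above; it assumes a Databricks-like runtime with the Kafka connector and Delta Lake available, and the topic, broker, schema and paths are invented.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

# Invented schema for an incoming telemetry feed.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("signal", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker.example.com:9092")
         .option("subscribe", "device-telemetry")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Append parsed events to a Delta table; the checkpoint makes the stream restartable.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/tmp/checkpoints/device_telemetry")
          .outputMode("append")
          .start("/tmp/delta/device_telemetry")
)
query.awaitTermination()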
Senior Data Engineer
G2A.COM
⚲ Warszawa
Wymagania
- SQL
- Python
- Apache Airflow
Nasze wymagania: You have a minimum of 5 years of experience in a similar role You have advanced SQL skills backed by experience working with BigQuery You have experience with Python programming You have experience in Google Cloud environment You have experience in creating and maintaining ETL/ELT processes You have practical knowledge of data warehouse architectures and data modeling Experience with Apache Airflow Experience with DBT (Data Build Tool) Your English language skills are at least at a B2 level, both spoken and written You possess excellent communication skills and the ability to work in a team You're detail-oriented and able to work under time pressure You take ownership of your responsibilities and demonstrate sound decision-making Mile widziane: Experience with Atlassian O projekcie: Join us as a Senior Data Engineer and become part of the expert Data & Tools team, which is the heart of our data warehouse. Your role will involve maintaining, developing, and ensuring the performance of a data warehouse based on Google Cloud Platform, as well as creating and optimizing ETL (Extract, Transform, Load) processes that support effective data management within the organization. You will ensure high data quality by ensuring precision, consistency, and reliability, including through monitoring and implementing appropriate control mechanisms. A key aspect of your role will be to develop existing solutions by implementing modern technologies that support the achievement of business goals. Your responsibilities will include keeping up with the latest trends and best practices in BI and data engineering. We are counting on your extensive experience in implementing innovative solutions that improve analytical systems and processes, increasing their efficiency. You will be working on ambitious projects, using modern technologies, and have unlimited opportunities to expand your skills and competencies. By working with us, you will have the chance to use advanced tools such as Apache Airflow, and dbt. If you'd like to join a team with a great atmosphere, where people enjoy their work, emphasize collaboration, and share knowledge, you've come to the right place – we invite you to apply 😊 Additionally, we offer you flexibility in your work – you can work hybrid or remotely. You will also have access to advanced analytical tools and full technical support. Furthermore, we care about work-life balance, competency development, and engage in charitable initiatives. On top of that, we provide benefits such as a welcome pack, Motivizer vouchers, medical care, a MultiSport card, and discounts on the G2A.COM sales platform. 
Zakres obowiązków: Develop and maintain data warehouses (BigQuery, Google Cloud Platform) Design, develop, and maintain ETL/ELT data pipelines Ensure high data quality, reliability, and performance optimization Take strong ownership of data solutions end to end Stay up to date with the latest trends and best practices in data engineering, BI, and data warehousing, and proactively introduce improvements Support and collaborate with other team members by sharing knowledge, best practices, and technical guidance Oferujemy: You can choose hybrid or even fully remote work You'll receive the tools necessary for your work, such as a laptop or/ and a phone We can provide your home office with ergonomic furniture and electronic devices, such as: footrests, exercise balls, chairs and lumbar supports, monitors, mousepads, laptop docking stations, mice, keyboards, and headphones We care about the work-life balance and wellbeing of our team We provide opportunities for you to turn your ideas into reality and we appreciate such initiatives We support skill and knowledge development via internal and external trainings We work according to the values of DEI: Diversity, Equity & Inclusion We support grassroots initiatives and charities We offer valuable benefits, such as the welcome pack, Motivizer vouchers (220 points/PLN employees can spend on a variety of services and products from known brands available on the platform), private health care, MultiSport card, the opportunity to participate in internal and external trainings and industry conferences, and many more You will receive discount codes to use on the G2A.COM sales platform so that you can enjoy the benefits of our offers and services
Python Developer/Data Engineer
Link Group
⚲ Remote
20 160 - 26 880 PLN (B2B)
Wymagania
- Python
- SPARQL
- SQL
- ETL
- Google cloud platform
- BigQuery
- Helm
- Kubernetes
- DevOps
- GitHub
- REST API
- GraphQL
- Apache Jena (nice to have)
- IAM (nice to have)
- GCP (nice to have)
- React (nice to have)
O projekcie: We are looking for a Mid-Level (3+y) AND SENIOR (5+y) Data / Knowledge Graph Engineer to support the development of data platforms and semantic data models used for advanced analytics and machine learning use cases. In this role, you will work with knowledge graphs, semantic technologies, and cloud-based data platforms on Google Cloud, helping to build scalable data pipelines and services that enable efficient data integration and querying. You will collaborate with data engineers, developers, and domain experts to transform complex data into structured and reusable knowledge models. Wymagania: - Experience with Python development. - Knowledge of RDF data modeling and SPARQL querying. - Experience with SQL and data processing pipelines (ETL/ELT). - Familiarity with Google Cloud Platform (GCP) services such as: - BigQuery - GKE - Dataproc - Experience with container orchestration and deployment tools (Helm, Kubernetes). - Knowledge of DevOps practices, including GitHub Actions. - Understanding of data APIs (REST / GraphQL). - Basic knowledge of Machine Learning and Knowledge Graph concepts. - Strong communication skills and English at C1 level. - Experience working in Agile environments. Nice to Have - Experience with Apache Jena. - Knowledge of IAM and VPC-SC in GCP. - Experience with React. - Background in pharmaceutical, biotech, or life sciences industry. Codzienne zadania: - Develop and maintain data pipelines and ETL/ELT processes. - Work with semantic data models using RDF and SPARQL. - Build and query knowledge graphs and support their integration with data platforms. - Use tools such as YASGUI to create SPARQL queries and YASR to visualize query results. - Develop backend services and APIs using Python. - Work with REST and GraphQL APIs to integrate data services. - Support deployment and infrastructure management using Terraform and GCP services. - Collaborate with frontend teams when building data-driven interfaces using React. - Work within Agile teams and contribute to CI/CD processes.
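A tiny rdflib sketch of the RDF and SPARQL work described above: building a three-triple knowledge graph in memory and querying it; the vocabulary and facts are invented for illustration only.

from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")

# Build a minimal in-memory knowledge graph.
g = Graph()
g.add((EX.aspirin, RDF.type, EX.Compound))
g.add((EX.aspirin, EX.treats, EX.headache))
g.add((EX.aspirin, RDFS.label, Literal("Acetylsalicylic acid")))

# SPARQL: which compounds treat which conditions?
QUERY = """
PREFIX ex: <http://example.org/>
SELECT ?compound ?condition
WHERE {
  ?compound a ex:Compound ;
            ex:treats ?condition .
}
"""

for compound, condition in g.query(QUERY):
    print(compound, "treats", condition)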
Power Platform Developer
Connectis_
⚲ Warszawa
21 840 - 25 200 PLN (B2B)
Wymagania
- Power Platform
- AI
- Power Automate
- SharePoint
- REST API
- C# (nice to have)
- JavaScript (nice to have)
- Python (nice to have)
- TypeScript (nice to have)
- HTML (nice to have)
- CSS (nice to have)
O projekcie: For our partner from the banking sector we are looking for a Power Platform Developer to join a team delivering innovative technology projects focused on building digital solutions based on Microsoft Power Platform, the cloud and AI components. It is a role for someone who wants to have a real impact on the development of modern tools that automate the bank's internal processes, streamline key administrative and business operations, and increase the security and efficiency of the organization. ✨ WE OFFER: - Participation in integration meetings and technology meetups where you can share knowledge and experience. - Support from a dedicated Connectis contact person, available to help with project-related matters. - Career development and long-term cooperation in a company with an established market position. - Hybrid work from the Warsaw office 1 or 2 days a week. - PLN 5000 for referring friends to our projects. - Access to the latest technologies and work tools. - A fast, remote recruitment process. Thank you for all applications. Please note that we will contact selected candidates. Wymagania: 🔍 WHAT WE EXPECT FROM YOU - At least 3 years of commercial experience working with Microsoft Power Platform. - Ability to optimize applications for performance, scalability and solution quality. - Highly developed communication skills and independence in working with stakeholders. - Hands-on experience building Canvas Apps and Power Automate flows. - Experience creating components with the Power Apps Component Framework (PCF). - Very good knowledge of the Microsoft 365 ecosystem, in particular SharePoint. - Experience building automations and workflows within Power Platform. - Practical knowledge of integrations based on REST APIs. Mile widziane: - Experience integrating applications with cloud services and AI solutions. - Knowledge of programming languages: C#, JavaScript, Python, TypeScript. - Knowledge of frontend technologies: HTML, CSS. Codzienne zadania: - Working directly with internal teams on the design and implementation of solutions based on Microsoft Power Platform. - Analyzing business and technical requirements and translating them into functional solutions. - Delivering the full software development lifecycle for Power Platform solutions. - Designing and building applications in Power Apps, in particular Canvas Apps. - Optimizing applications for performance, scalability and quality. - Integrating the developed solutions with external systems and services. - Building and developing flows with Power Automate. - Supporting deployments and training end users.
Junior IT Specialist (m/f)
Salesandbetter Sp. z o.o.
⚲ Warszawa
Wymagania
- Python
- Docker
Nasze wymagania: proficiency in data engineering and machine learning; at least 2-3 years of experience working with Python in a cloud computing environment is a plus. advanced knowledge of Python: fluent handling of data structures, asynchronous programming and retrieving data via interfaces (APIs). experience with vector databases and data retrieval: efficient searching of mathematically processed text data (vectors). hands-on experience in natural language processing (NLP) using modern language models (e.g. transformers). automation (DevOps) skills: setting up daily data synchronizations in a cloud environment and containerization (Docker). precision in analyzing and processing complex document structures and avoiding duplicates in the database. experience with cloud hosting: managing and securing virtual instances (Linux). independent troubleshooting of connection interruptions or changes in external data sources. knowledge of English. Mile widziane: an engineering degree in computer science (Bachelor's/Master's) experience in a similar position knowledge of German O projekcie: The project involves building and automating a scalable pipeline for processing Polish-language text data to support AI-based queries. Zakres obowiązków: Building and automating a scalable data pipeline for processing large volumes of (Polish) text data and making it available for AI-based queries. Data pipeline: developing an automated process that retrieves, filters and processes new public datasets every day. Backend integration: providing an interface through which the editor environment sends queries to the database and receives results within milliseconds. Infrastructure management: monitoring cloud costs and optimizing compute capacity for the daily update cycles. Oferujemy: stable employment, long-term cooperation, opportunities for further training and development, work on an innovative project, work in an international team
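A minimal sketch of the vector-retrieval idea at the core of this project: embedding Polish text snippets and ranking them by cosine similarity against a query. It uses sentence-transformers with a multilingual model as an assumed choice; a production setup would store the vectors in a real vector database rather than in memory.

import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative documents; in the project these would come from the daily ingestion pipeline.
documents = [
    "Nowe przepisy podatkowe wchodzą w życie od stycznia.",
    "Sąd wydał wyrok w sprawie ochrony danych osobowych.",
    "Bank centralny obniżył stopy procentowe.",
]

# A multilingual embedding model (assumed choice; any sentence-embedding model would do).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 2) -> list[tuple[float, str]]:
    """Return the top_k documents ranked by cosine similarity to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # cosine similarity, since the vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), documents[i]) for i in best]

if __name__ == "__main__":
    for score, doc in search("zmiany w podatkach"):
        print(f"{score:.3f}  {doc}")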
Lead Data Engineer (Spark)
Addepto
⚲ Warszawa, Wrocław, Białystok, Kraków, Gdańsk, Katowice, Poznań
21 000 - 31 920 PLN (B2B)
Wymagania
- Python
- SQL
- Spark
- AWS
- Airflow
- Docker
- Cloudera
- Kafka
- Kubernetes (nice to have)
- Hadoop (nice to have)
- NiFi (nice to have)
- Java (nice to have)
- Scala (nice to have)
- Trino (nice to have)
- Hudi (nice to have)
O projekcie: Addepto is a leading AI consulting (addepto.com/ai-consulting/) and data engineering (addepto.com/data-engineering-services/) company that builds scalable, ROI-focused AI solutions for some of the world's largest enterprises and pioneering startups, including Rolls Royce, Continental, Porsche, ABB, and WGU. With an exclusive focus on Artificial Intelligence and Big Data, Addepto helps organizations unlock the full potential of their data through systems designed for measurable business impact and long-term growth. The company's work extends beyond client engagements. Drawing from real-world challenges and insights, Addepto has developed its own product - ContextClue - and actively contributes open-source solutions to the AI community. This commitment to transforming practical experience into scalable innovation has earned Addepto recognition by Forbes as one of the top 10 AI consulting companies worldwide. As part of KMS Technology, a US-based global technology group, Addepto combines deep AI specialization with enterprise-scale delivery capabilities—enabling the partnership to move clients from AI experimentation to production impact, securely and at scale. Beyond client projects, we have developed our own product offerings born from real-life client insights and challenges. We are also actively releasing open-source solutions to the community, transforming practical experience into tools that benefit the broader AI ecosystem. This commitment to scalable innovation, proven ROI delivery, and knowledge sharing has earned us recognition by Forbes as one of the top 10 AI consulting companies worldwide. As a Lead Data Engineer, you will have the exciting opportunity to work with a team of technology experts on challenging projects across various industries, leveraging cutting-edge technologies. Here are some of the projects we are seeking talented individuals to join: - Design and development of the platform for managing vehicle data for global automotive company. This project develops a shared platform for processing massive car data streams. It ingests terabytes of daily data, using both streaming and batch pipelines for near real-time insights. The platform transforms raw data for data analysis and Machine Learning, this empowers teams to build real-world applications like digital support and smart infotainment and unlocks data-driven solutions for car maintenance and anomaly detection across the organization. - Design and development of a universal data platform for global aerospace companies. This Azure and Databricks powered initiative combines diverse enterprise and public data sources. The data platform is at the early stages of the development, covering design of architecture and processes as well as giving freedom for technology selection. Discover our perks and benefits: - Work in a supportive team of passionate enthusiasts of AI & Big Data. - Engage with top-tier global enterprises and cutting-edge startups on international projects. - Enjoy flexible work arrangements, allowing you to work remotely or from modern offices and coworking spaces. - Accelerate your professional growth through career paths, knowledge-sharing initiatives, language classes, and sponsored training or conferences, including a partnership with Databricks, which offers industry-leading training materials and certifications. - Choose your preferred form of cooperation: B2B or a contract of mandate, and enjoy 20 fully paid days off. 
- Participate in team-building events and utilize the integration budget. - Celebrate work anniversaries, birthdays, and milestones. - Access medical and sports packages, eye care, and well-being support services, including psychotherapy and coaching. - Get full work equipment for optimal productivity, including a laptop and other necessary devices. - With our backing, you can boost your personal brand by speaking at conferences, writing for our blog, or participating in meetups. - Experience a smooth onboarding with a dedicated buddy, and start your journey. Wymagania: 🎯 What you'll need to succeed in this role: - 7+ years of proven commercial experience in implementing, developing, or maintaining Big Data systems. - Strong programming skills in Python or Java/Scala: writing a clean code, OOP design. - Experience in designing and implementing data governance and data management processes. - Familiarity with Big Data technologies like Spark, Cloudera, Kafka, Airflow, NiFi, Docker, Kubernetes, and Iceberg. - Proven expertise in implementing and deploying solutions in cloud environments (with a preference for AWS). - Excellent understanding of dimensional data and data modeling techniques. - Excellent communication skills and consulting experience with direct interaction with clients. - Ability to work independently and take ownership of project deliverables. - Master’s or Ph.D. in Computer Science, Data Science, Mathematics, Physics, or a related field. - Fluent English (C1 level) is a must. Codzienne zadania: - Design and develop scalable data management architectures, infrastructure, and platform solutions for streaming and batch processing using Big Data technologies like Apache Spark, Hadoop, Iceberg. - Design and implement data management and data governance processes and best practices. - Contribute to the development of CI/CD and MLOps processes. - Develop applications to aggregate, process, and analyze data from diverse sources. - Collaborate with the Data Science team on data analysis and Machine Learning projects, including text/image analysis and predictive model building. - Develop and organize data transformations using DBT and Apache Airflow. - Translate business requirements into technical solutions and ensure optimal performance and quality.
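A short kafka-python sketch of the event-consumption side of the vehicle-data platform described above; the broker, topic, field names and threshold are invented, and real anomaly detection would of course involve more than a single rule.

import json
from kafka import KafkaConsumer

# Invented broker and topic for an illustrative telemetry feed.
consumer = KafkaConsumer(
    "vehicle-events",
    bootstrap_servers="broker.example.com:9092",
    group_id="anomaly-screening",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Flag implausible readings as they arrive; a toy stand-in for real anomaly detection.
for message in consumer:
    event = message.value
    if event.get("engine_temp_c", 0) > 130:
        print(f"possible anomaly from {event.get('vehicle_id')}: {event}")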
Junior Data Engineer (Databricks)
Addepto
⚲ Warsaw, Gdańsk, Wrocław, Kraków, Białystok, Poznań, Katowice
8 400 - 15 120 PLN (B2B)
Wymagania
- Python
- SQL
- ETL
- Azure
- Airflow
- Databricks
- Spark
- Docker
- CI/CD (nice to have)
- Kubernetes (nice to have)
- Kafka (nice to have)
- Power BI (nice to have)
- Dagster (nice to have)
- dbt (nice to have)
O projekcie: Addepto is a leading AI consulting and data engineering company that builds scalable, ROI-focused AI solutions for some of the world’s largest enterprises and pioneering startups, including Rolls Royce, Continental, Porsche, ABB, and WGU. With our exclusive focus on Artificial Intelligence and Big Data, we help organizations unlock the full potential of their data through systems designed for measurable business impact and long-term growth. Beyond client projects, we have developed our own product offerings born from real-life client insights and challenges. We are also actively releasing open-source solutions to the community, transforming practical experience into tools that benefit the broader AI ecosystem. This commitment to scalable innovation, proven ROI delivery, and knowledge sharing has earned us recognition by Forbes as one of the top 10 AI consulting companies worldwide. As a Junior Data Engineer, you will have the exciting opportunity to work with a team of technology experts on challenging projects across various industries, leveraging cutting-edge technologies. Here are some of the projects we are seeking talented individuals to join: - Design and development of a universal data platform for global aerospace companies. This Azure and Databricks powered initiative combines diverse enterprise and public data sources. The data platform is at the early stages of the development, covering design of architecture and processes, as well as giving freedom for technology selection. - Data Platform Transformation for energy management association body. This project addressed critical data management challenges, boosting user adoption, performance, and data integrity. The team is implementing a comprehensive data catalog, leveraging Databricks and Apache Spark/PySpark, for simplified data access and governance. Secure integration solutions and enhanced data quality monitoring, utilizing Delta Live Table tests, established trust in the platform. The intermediate result is a user-friendly, secure, and data-driven platform, serving as a basis for further development of ML components. - Design of the data transformation and following data ops pipelines for global car manufacturer. This project aims to build a data processing system for both real-time streaming and batch data. We’ll handle data for business uses like process monitoring, analysis, and reporting, while also exploring LLMs for chatbots and data analysis. Key tasks include data cleaning, normalization, and optimizing the data model for performance and accuracy. Discover our perks and benefits: - Work in a supportive team of passionate enthusiasts of AI & Big Data. - Engage with top-tier global enterprises and cutting-edge startups on international projects. - Enjoy flexible work arrangements, allowing you to work remotely or from modern offices and coworking spaces. - Accelerate your professional growth through career paths, knowledge-sharing initiatives, language classes, and sponsored training or conferences, including a partnership with Databricks, which offers industry-leading training materials and certifications. - Choose your preferred form of cooperation - B2B or a contract of mandate - and enjoy 20 fully paid days off. - Participate in team-building events and utilize the integration budget. - Celebrate work anniversaries, birthdays, and milestones. - Access medical and sports packages, eye care, and well-being support services, including psychotherapy and coaching. 
- Get full work equipment for optimal productivity, including a laptop and other necessary devices. - With our backing, you can boost your personal brand by speaking at conferences, writing for our blog, or participating in meetups. - Experience a smooth onboarding with a dedicated buddy, and start your journey in our friendly, supportive, and autonomous culture. Wymagania: What you’ll need to succeed in this role: - At least 1 year of proven commercial experience developing or maintaining Big Data systems. - Hands-on experience with Big Data technologies, including Databricks, Apache Spark, Airflow, and dbt. - Strong programming skills in Python: writing clean code, OOP design. - Experience in designing and implementing data governance and data management processes. - Experience implementing and deploying solutions in cloud environments (with a preference for Azure). - Practical knowledge of DevOps practices, including designing and maintaining CI/CD pipelines for data and ML workflows, and Terraform for Infrastructure as Code. - Knowledge of how to build and deploy Power BI reports and dashboards for data visualization. - Excellent understanding of dimensional data and data modeling techniques. - Excellent communication skills and consulting experience with direct interaction with clients. - Ability to work independently and take ownership of project deliverables. - Bachelor’s or Master’s degree in Computer Science, Data Science, Mathematics, Physics, or a related field. Codzienne zadania: - Design scalable data processing pipelines for streaming and batch processing using Big Data technologies like Databricks, Airflow, and/or Dagster. - Contribute to the development of CI/CD and MLOps processes. - Develop applications to aggregate, process, and analyze data from diverse sources. - Collaborate with the Data Science team on Machine Learning projects, including text/image analysis and predictive model building. - Develop and organize data transformations using Databricks/dbt and Apache Airflow. - Translate business requirements into technical solutions and ensure optimal performance and quality.
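For illustration of the streaming-and-batch pipeline work this role describes, below is a minimal PySpark Structured Streaming sketch; the schema, paths, and window sizes are invented for the example and are not taken from the project.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("sensor-stream-sketch").getOrCreate()

# Explicit schema for incoming JSON events (all fields are placeholders).
schema = StructType([
    StructField("machine_id", StringType()),
    StructField("temperature", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read a stream of JSON files dropped into a landing folder.
events = spark.readStream.schema(schema).json("/landing/sensor_events/")

# 5-minute average temperature per machine, with a watermark to bound late data.
aggregated = (events
              .withWatermark("event_time", "10 minutes")
              .groupBy(F.window("event_time", "5 minutes"), "machine_id")
              .agg(F.avg("temperature").alias("avg_temperature")))

# Write results as Parquet; the checkpoint makes the query restartable.
query = (aggregated.writeStream
         .outputMode("append")
         .format("parquet")
         .option("path", "/curated/sensor_aggregates/")
         .option("checkpointLocation", "/checkpoints/sensor_aggregates/")
         .start())
query.awaitTermination()
```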
Junior Data Engineer (Azure Data Factory)
Addepto
⚲ Warsaw, Gdańsk, Wrocław, Białystok, Kraków, Poznań, Katowice
7 000 - 12 600 PLN (B2B)
Wymagania
- Python
- SQL
- ETL
- Azure
- Azure Data Factory
- CI/CD (nice to have)
- Kubernetes (nice to have)
- Power BI (nice to have)
- dbt (nice to have)
O projekcie: Addepto is a leading AI consulting (addepto.com/ai-consulting/) and data engineering (addepto.com/data-engineering-services/) company that builds scalable, ROI-focused AI solutions for some of the world's largest enterprises and pioneering startups, including Rolls Royce, Continental, Porsche, ABB, and WGU. With an exclusive focus on Artificial Intelligence and Big Data, Addepto helps organizations unlock the full potential of their data through systems designed for measurable business impact and long-term growth. The company's work extends beyond client engagements. Drawing from real-world challenges and insights, Addepto has developed its own product - ContextClue - and actively contributes open-source solutions to the AI community. This commitment to transforming practical experience into scalable innovation has earned Addepto recognition by Forbes as one of the top 10 AI consulting companies worldwide. As part of KMS Technology, a US-based global technology group, Addepto combines deep AI specialization with enterprise-scale delivery capabilities—enabling the partnership to move clients from AI experimentation to production impact, securely and at scale. As a Junior Data Engineer, you will have the opportunity to support and further develop an Azure-based data integration solution built primarily around Azure Data Factory (ADF). The current environment includes Azure Functions and ingestion components, but daily delivery is strongly centered on ADF pipeline design, orchestration, monitoring, and continuous improvement. The project focuses on expanding and stabilizing the existing data platform, including the warehouse layer (Azure SQL Database or Synapse), optional dbt-based transformations, and occasional Power BI reporting support. You will work closely with the client-side Product Owner and Architect, proactively aligning business needs with technical implementation decisions and ensuring high-quality, scalable solutions. Additionally, you will contribute to Azure DevOps CI/CD pipelines and release processes to maintain reliable deployments across environments. Discover our perks and benefits: - Work in a supportive team of passionate enthusiasts of AI & Big Data. - Engage with top-tier global enterprises and cutting-edge startups on international projects. - Enjoy flexible work arrangements, allowing you to work remotely or from modern offices and coworking spaces. - Accelerate your professional growth through career paths, knowledge-sharing initiatives, language classes, and sponsored training or conferences, including a partnership with Databricks, which offers industry-leading training materials and certifications. - Choose your preferred form of cooperation - B2B or a contract of mandate - and enjoy 20 fully paid days off. - Participate in team-building events and utilize the integration budget. - Celebrate work anniversaries, birthdays, and milestones. - Access medical and sports packages, eye care, and well-being support services, including psychotherapy and coaching. - Get full work equipment for optimal productivity, including a laptop and other necessary devices. - With our backing, you can boost your personal brand by speaking at conferences, writing for our blog, or participating in meetups. 
Wymagania: What you’ll need to succeed in this role: - At least 2 years of hands-on experience with Azure Data Factory (must-have), including: pipeline and orchestration design; linked services, datasets, triggers, and parameterization; operational monitoring, troubleshooting, and performance tuning; and deployment-aware ADF development in enterprise environments. - Excellent knowledge of Python (Azure Functions, REST APIs, automation). - Strong SQL skills and solid understanding of data modeling for ETL/ELT and warehouse workloads. - Experience with CI/CD processes in Azure DevOps (pipelines, releases, multi-environment deployments). - Solid understanding of Azure services and cloud-based data solutions (e.g., Azure SQL Database, Azure Key Vault). - Experience with Power BI for occasional dashboarding and reporting. - Experience working with modern development practices and tools. - Consulting mindset with proactive communication and strong stakeholder alignment skills. - Ability to effectively collaborate with the Product Owner and Architect during planning and delivery. - Independent and responsible approach to delivering high-quality solutions. - Excellent command of English (at least C1 level). ➕ Nice to have: - Knowledge of dbt. - Experience implementing data warehouses on Azure. Codzienne zadania: - Design, develop, and maintain Azure Data Factory pipelines, including orchestration, parameterization, and trigger management. - Configure and manage linked services and datasets within ADF. - Monitor, troubleshoot, and optimize ADF pipelines to ensure performance and reliability. - Develop and maintain ETL/ELT processes and support the evolution of the data warehouse layer (Azure SQL Database or Synapse). - Translate business requirements into technical solutions in close collaboration with the Product Owner and Architect. - Develop and maintain Python-based components (e.g., Azure Functions, API integrations, automation scripts). - Contribute to CI/CD processes in Azure DevOps, including pipelines, releases, and environment promotion. - Support occasional Power BI reporting and dashboarding needs. - Ensure proactive communication, stakeholder alignment, and visibility of risks and impacts. - Take ownership of assigned tasks and actively contribute to continuous platform improvement.
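As a rough illustration of the deployment-aware ADF work described above, the sketch below starts and monitors a parameterized Azure Data Factory pipeline run with the Azure SDK for Python; the subscription, resource group, factory, pipeline, and parameter names are placeholders, not details of the actual project.

```python
# pip install azure-identity azure-mgmt-datafactory  (assumed environment)
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# All names below are placeholders for illustration only.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY_NAME = "<data-factory-name>"
PIPELINE_NAME = "<pipeline-name>"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a parameterized pipeline run.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2024-01-31"},
)

# Poll the run status until it finishes.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline finished with status: {status.status}")
```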
Elasticsearch Engineer
Antal
⚲ Kraków
30 240 - 38 640 PLN (B2B)
Wymagania
- Elasticsearch
- Google Cloud Platform
- Hadoop
- DevOps
- Kanban
- Jira
- Confluence
- Scala (nice to have)
- Apache Spark (nice to have)
- JUnit (nice to have)
- TDD (nice to have)
- BDD (nice to have)
- RHEL (nice to have)
- Shell (nice to have)
- Clean Code (nice to have)
- DRY (nice to have)
- Design Patterns (nice to have)
O projekcie: Elasticsearch Engineer - Hybrid work model – 2 days per week from the office in Kraków We are looking for an experienced Elasticsearch Engineer to join an international technology team working on a large-scale platform focused on advanced analytics, Big Data, and cloud-native solutions within the financial services domain. The role is part of a globally distributed setup with teams located across Europe and Asia, operating in an Agile environment with regular releases and strong engineering standards. Project Overview The project focuses on building and maintaining a Big Data and analytics platform that leverages Elasticsearch, AI technologies, and Google Cloud Platform (GCP) to support intelligence-driven analysis and investigation processes. The platform is used by analysts, investigators, and data specialists operating at global scale. The technology stack includes Elasticsearch, Scala, Spark, Hadoop, with an ongoing transition towards cloud-native solutions on GCP, including Kubernetes, Terraform, Ansible, and CI/CD pipelines. What We Offer - Work on a business-critical, large-scale analytics platform - Collaboration with experienced, international engineering teams - Exposure to modern cloud and Big Data technologies - Stable, long-term project with clear engineering standards and processes Wymagania: Required Skills & Experience - Strong hands-on experience with Elasticsearch (acting as an SME) - Good knowledge of Google Cloud Platform (GCP) - Experience with the Hadoop ecosystem - Familiarity with Agile and DevOps practices (Scrum / Kanban) - Experience using Jira and Confluence - Very good communication skills in English (discussion, collaboration, presentations) Nice to Have - Experience with Scala and Apache Spark - Knowledge of JUnit, TDD, BDD - Experience with RHEL / shell scripting / regex - Understanding of Clean Code principles, SOLID, KISS, DRY, and design patterns Codzienne zadania: - Implement, configure, and upgrade Elasticsearch clusters - Provide L2/L3 production support for business users and development teams - Support development pods with troubleshooting, performance tuning, and operational issues - Assist in the migration of Elasticsearch from on-premises environments to the cloud - Collaborate closely with engineers, DevOps, and platform teams in an Agile setup
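To illustrate the day-to-day index and cluster work mentioned above, here is a minimal sketch using the official Elasticsearch Python client (8.x assumed); the index name, mappings, credentials, and sample document are invented for the example.

```python
# pip install elasticsearch  (Python client 8.x assumed)
from elasticsearch import Elasticsearch

# Connection details are placeholders.
es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "<password>"), verify_certs=False)

INDEX = "investigation-cases"

# Create an index with explicit mappings if it does not exist yet.
if not es.indices.exists(index=INDEX):
    es.indices.create(
        index=INDEX,
        settings={"number_of_shards": 3, "number_of_replicas": 1},
        mappings={"properties": {
            "case_id": {"type": "keyword"},
            "summary": {"type": "text"},
            "opened_at": {"type": "date"},
        }},
    )

# Index a document and run a simple full-text query.
es.index(index=INDEX, id="case-001",
         document={"case_id": "case-001", "summary": "suspicious wire transfer", "opened_at": "2024-05-01"})
es.indices.refresh(index=INDEX)
hits = es.search(index=INDEX, query={"match": {"summary": "wire transfer"}})
for hit in hits["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```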
Ekspertka / Ekspert ds. Modelowania Ryzyka - Data Scientist (k/m)
GRUPA PZU
⚲ Warszawa
Wymagania
- SQL
- Python
- SAS
Our requirements: - At least 4 years of experience in data analysis and modelling - Experience in building machine learning models - Experience working with large databases and fluent SQL - Knowledge of Python and a willingness to work and grow in this language - Strong analytical skills - A university degree in econometrics, mathematics, or a related field - Knowledge of English Nice to have: - Experience in insurance, controlling, or pricing actuarial work - Practical experience with SAS Responsibilities: - Creating and maintaining tariffs for corporate insurance - Creating and maintaining the machine learning models that underpin insurance premium calculation - Building predictive models on tabular data and performing text analysis - Creating and developing modelling and analytical tools - Participating in business projects important to the corporate insurance area - Automating data and risk monitoring processes - Work with a direct impact on the business and financial results of PZU SA We offer: - Hybrid work model with flexible start times - Employment contract - Employee Pension Plan of 7% paid by the employer - Medical care at PZU Zdrowie - Employee discount of up to 50% on insurance (including PZU DOM, PZU AUTO) - Benefits platform incl. a sports card, cinema and theatre tickets, and shopping vouchers - Flexible group life insurance in multiple variants - Access to a library of digital trainings and modern learning platforms - Training and development programmes for employees and managers - Wellbeing programmes and activities for employees - 4 hours off on your birthday (to be taken in the month of your birthday) - 2 days per year for employee volunteering (you can run your own volunteering project or join a campaign organised by the PZU Foundation) - Opportunity to pursue your sporting passions within the 18 sections of the PZU Sport Team (from badminton to sailing) - The greenest office in Warsaw (PZU Park), with relaxation zones and a gym
Database Administration & Operations Tech Lead
AVENGA (Agencja Pracy, nr KRAZ: 8448)
⚲ Warszawa
25 200 - 31 920 PLN (B2B)
Wymagania
- Oracle
- MSSQL
- MySQL
- MongoDB
- PostgreSQL
- DB2
O projekcie: Database Administration & Operations is responsible for the deployment, management, optimisation, and protection of enterprise database platforms. The area ensures high availability, integrity, performance, and security of structured and unstructured data supporting critical business applications. It spans relational (SQL) and NoSQL platforms, addressing challenges such as version upgrades, patch management, capacity planning, data lifecycle management, and ensuring compliance with regulatory and security standards. The function also maintains robust backup, recovery, and disaster recovery capabilities, while continuously improving database reliability, automation, and operational maturity. Willingness and ability to mentor junior colleagues is expected, and experience in large cloud migration projects is a plus. Wymagania: - Expert-level knowledge of MS SQL, MySQL, Progress, MongoDB, Oracle, DB2 (administration, backup, patching). - Strong performance tuning experience (execution plans, indexing, wait events, AWR/ASH, engine metrics). - Deep understanding of HA/DR architectures (AlwaysOn, Data Guard, log shipping, replication). - Expertise with TLS, certificates, and driver configuration for secure DB endpoints. - Skills in capacity planning, forecasting, and data growth modelling. - Knowledge of backup technologies, snapshot-based recovery, and cross-site failover. - Understanding of data security, encryption, auditing, masking, and privilege management. - Familiarity with cloud-managed database services (Azure/GCP/AWS equivalents if applicable). - Observability/monitoring expertise for DB performance, alerting, and telemetry. - Experience with license compliance and vendor-specific entitlement rules. - Expertise in migration planning, version upgrades, and platform consolidation. - Ability to mentor and develop L2 database analysts; strong technical leadership. - Scripting capabilities (PowerShell, Python, Bash) for automation of routine DB operations. - People management skills for leading the DB admin L2 analysts. Codzienne zadania: - Lead the management, deployment, configuration, and optimisation of enterprise database instances (DB technologies used: MS SQL, MySQL, Progress, MongoDB, Oracle, DB2). - Perform advanced database backup, recovery, and disaster recovery procedures; ensure validated DR (backup/restore) plans. - Conduct performance tuning: execution plans, indexing strategy, query analysis, wait events, resource bottlenecks. - Stay up to date with database high availability solutions (clustering, replication, log shipping, AlwaysOn, Data Guard), although these are rarely, if ever, used at the moment. - Drive version upgrades, patching, vulnerability remediation, and database platform lifecycle management. - Ensure data integrity, security, and compliance across all database environments. - Oversee encryption (TLS, TDE), secure driver configurations, and secure connectivity. - Lead incident response and deep troubleshooting of database service outages and performance degradations. - Collaborate with application, infrastructure, storage, and cloud teams to ensure seamless data availability. - Maintain documentation of database architectures, operational procedures, and support models. - Conduct regular database capacity, utilisation, and health reviews; propose optimisation initiatives. - Support and remediate findings from audits, compliance checks, and license reviews.
- Participate in governance, reporting, service reviews, and operational improvement initiatives. - Provide technical leadership, mentoring, and guidance to L2 Database Analysts. Organize an On-Call rota for this area. - Manage vendor escalations and coordinate with third‑party support. - Lead database migration and consolidation activities (on‑prem → cloud or cross-platform). - Support business continuity planning, DR test (BackUp Restoration) execution, and post-test reporting. - Oversee secure decommissioning of end-of-life database hardware and data sanitisation processes.
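As one small illustration of the "scripting capabilities for automation of routine DB operations" listed above, the sketch below checks the most recent full backup per database on a SQL Server instance via pyodbc; the connection string is a placeholder, and the standard msdb backup history tables are assumed to be available.

```python
# pip install pyodbc -- a minimal sketch; server name and credentials are placeholders.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 18 for SQL Server};SERVER=<server>;"
            "DATABASE=msdb;UID=<user>;PWD=<password>;TrustServerCertificate=yes")

# Most recent full backup per database, from the standard msdb backup history tables.
SQL = """
SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name AND b.type = 'D'
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY last_full_backup;
"""

with pyodbc.connect(CONN_STR) as conn:
    for name, last_backup in conn.cursor().execute(SQL).fetchall():
        print(f"{name:30} last full backup: {last_backup}")
```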
Data Migration Engineer
Link Group
⚲ Remote
23 520 - 28 560 PLN (B2B)
Wymagania
- Python
- Apache Airflow
- Data migration
- Data engineering
- Insurance domain data models
- ETL processes
- Analytical skills
- Troubleshooting
- EIS Insurance platform (nice to have)
- Insurance system transformation (nice to have)
O projekcie: Role Overview We are looking for a Data Migration Engineer to support a large-scale insurance platform transformation project. The role focuses on designing and executing data migration processes, ensuring the accurate transfer of complex insurance data structures from legacy systems to a new platform. Wymagania: - Strong experience with Python - Hands-on experience with Apache Airflow - Experience in data migration or data engineering projects - Strong understanding of insurance domain data models (policy, quote, endorsement, coverage etc.) - Experience working with ETL processes and large datasets - Strong analytical and troubleshooting skills Nice to Have - Experience with EIS insurance platform - Experience in large insurance system transformations Codzienne zadania: - Design and implement data migration pipelines using Python and Apache Airflow - Extract, transform and load insurance data from legacy systems into the new platform - Work with complex insurance data models including policy, quote, endorsement, coverage and policy transactions - Ensure data quality, consistency and validation throughout the migration process - Collaborate with architects, business analysts and engineering teams to define migration logic - Troubleshoot data issues and optimize migration workflows
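To illustrate the kind of Python + Apache Airflow migration pipeline this listing describes, here is a minimal DAG sketch (a recent Airflow 2.x is assumed); the task logic, field mappings, and DAG id are invented placeholders rather than the client's actual migration logic.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_policies(**context):
    # Placeholder: read policy rows from the legacy system.
    return [{"policy_id": "P-1", "premium": 120.0}]

def transform_policies(**context):
    rows = context["ti"].xcom_pull(task_ids="extract_policies")
    # Placeholder mapping from legacy fields to the target data model.
    return [{"policyNumber": r["policy_id"], "annualPremium": r["premium"]} for r in rows]

def load_policies(**context):
    rows = context["ti"].xcom_pull(task_ids="transform_policies")
    print(f"Loading {len(rows)} policies into the target platform")  # placeholder load step

with DAG(
    dag_id="legacy_policy_migration",
    start_date=datetime(2024, 1, 1),
    schedule=None,          # triggered manually per migration wave
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_policies", python_callable=extract_policies)
    transform = PythonOperator(task_id="transform_policies", python_callable=transform_policies)
    load = PythonOperator(task_id="load_policies", python_callable=load_policies)
    extract >> transform >> load
```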
Data Migration Architect
Link Group
⚲ Remote
30 240 - 36 960 PLN (B2B)
Wymagania
- Python
- Apache Airflow
- Data migration
- Data engineering
- Insurance domain data models
- ETL processes
- Analytical skills
- Troubleshooting
- EIS Insurance platform (nice to have)
- Insurance system transformation (nice to have)
O projekcie: Role Overview We are looking for a Data Migration Architect to support a large-scale insurance platform transformation project. The role focuses on designing and executing data migration processes, ensuring the accurate transfer of complex insurance data structures from legacy systems to a new platform. Wymagania: - Strong experience with Python - Hands-on experience with Apache Airflow - Experience in data migration or data engineering projects - Strong understanding of insurance domain data models (policy, quote, endorsement, coverage etc.) - Experience working with ETL processes and large datasets - Strong analytical and troubleshooting skills Nice to Have - Experience with EIS insurance platform - Experience in large insurance system transformations Codzienne zadania: - Design and implement data migration pipelines using Python and Apache Airflow - Extract, transform and load insurance data from legacy systems into the new platform - Work with complex insurance data models including policy, quote, endorsement, coverage and policy transactions - Ensure data quality, consistency and validation throughout the migration process - Collaborate with architects, business analysts and engineering teams to define migration logic - Troubleshoot data issues and optimize migration workflows
Data Integration Expert
DCG Sp. z o.o
⚲ Warszawa
30 240 - 35 280 PLN (B2B)
Wymagania
- Microsoft
- SQL
- ETL
- SSIS
- OLAP
- Azure Synapse (nice to have)
- SSAS (nice to have)
- SSRS (nice to have)
O projekcie: As a recruitment company, we are aware that every solid business is driven by people with the right competences. Our management style and partnership approach let us adapt flexibly to your needs and provide full support throughout the collaboration. Due to our continuous growth and the large number of recruitment projects we run for our Partners, we are looking for a person for the position of Data Integration Expert. Occasional visits to the office in Warsaw. Wymagania: - 5-7 years of experience in Data Integration - Very good experience with SSIS or Azure Data Factory - Experience in data integration and transformation projects (ELT / ETL) - Knowledge of building and loading OLAP / SSAS cubes - Experience with SQL (T-SQL, stored procedures, query optimization) - Experience working with data warehouses (facts, dimensions, data marts) - Ability to quickly find your way around an existing environment - Independence and a focus on delivering results in a short time Nice to have: - Knowledge of Azure Synapse - Experience with SSAS / SSRS - Experience in environments with highly critical data Codzienne zadania: - Analysing existing data integration and transformation processes - Completing and stabilising ETL / ELT projects already in progress - Developing and modifying SSIS packages or ADF pipelines - Optimizing data load performance - Working with the IT team and the business to refine transformation logic - Collaborating on the maintenance and development of OLAP cubes (SSAS) - Supporting data preparation for reporting (SSRS / Power BI) - Identifying risks and bottlenecks in the architecture
DWH Developer / Data Engineer
Netcredit
⚲ Warszawa, Wola
Wymagania
- SQL
- PostgreSQL
- Python
- dbt
- Apache Airflow
- Docker
- Git
- Jenkins
- Bash
- Jinja2
- Apache Superset / Preset BI
- Data Lake
- CI / test automation
Our requirements: - At least one year of experience working with a data lake and/or data streaming - Comfortable programming in Python - Very good knowledge of SQL (PostgreSQL) - Building and maintaining ETL / ELT pipelines - Understanding of the Machine Learning model lifecycle and ability to collaborate with a Data Science team - Familiarity with CI practices and test automation - English at C1 level or higher Nice to have: - Experience with dbt and headless BI solutions O projekcie: Build data systems that make credit decisions in real time. We are looking for someone who not only builds pipelines but understands that data means real business decisions. With us, your solutions will directly influence automated credit decisions and the development of new financial products. If you want real impact, end-to-end ownership, and close collaboration with the Data Science, Credit Risk, and Product teams, this is the role for you! Responsibilities: - Designing and developing a modern data lake + ELT architecture fed from production databases - Building efficient, scalable data pipelines (ETL/ELT, automation, monitoring) - Developing data transformation models in SQL / dbt - Co-creating production APIs that support data-driven credit decisions - Implementing advanced data quality and data validation mechanisms - Automating decision processes for new financial products - Close collaboration with the Data Science team (deploying ML models) and Credit Risk - Real ownership of the data engineering area in the solutions being developed; this is not a "pipelines only" role, it is a role where data drives the business We offer: - Real influence on credit decisions and risk strategies - Work on products that are growing rapidly and require a scalable data architecture - A high degree of autonomy and end-to-end responsibility - Modern tools and openness to technological improvements - Collaboration with experienced Data Science and Engineering teams - Employment contract or B2B - Hybrid work - Development budget (training, conferences) - An extra day off for your birthday - A culture of collaboration and real impact, without a corporate middle layer
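As a small illustration of the "advanced data quality and data validation mechanisms" mentioned in the responsibilities above, the pandas sketch below runs a few generic checks (completeness, uniqueness, validity, freshness); the column names and thresholds are invented for the example.

```python
import pandas as pd

def validate_loans(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality violations."""
    issues = []

    # Completeness: key identifiers must never be null.
    if df["loan_id"].isna().any():
        issues.append("loan_id contains nulls")

    # Uniqueness: one row per loan.
    if df["loan_id"].duplicated().any():
        issues.append("duplicate loan_id values")

    # Validity: amounts must be positive and below a sanity limit.
    if not df["amount"].between(1, 1_000_000).all():
        issues.append("amount outside expected range")

    # Freshness: the newest record should not be older than one day.
    newest = pd.to_datetime(df["created_at"], utc=True).max()
    if pd.Timestamp.now(tz="UTC") - newest > pd.Timedelta(days=1):
        issues.append("no records newer than 24h")

    return issues

sample = pd.DataFrame({
    "loan_id": ["L1", "L2", "L2"],
    "amount": [1500.0, -10.0, 250.0],
    "created_at": ["2024-06-01T10:00:00Z"] * 3,
})
print(validate_loans(sample))  # flags the duplicate id, the negative amount, and stale data
```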
Collibra Data Quality Product Owner / Data Analyst
Verita HR
⚲ Remote
25 200 - 32 760 PLN (B2B)
Wymagania
- Collibra Data Quality
- Business analysis
- Product Owner
- Data Quality
- Data Governance
- Power BI (nice to have)
O projekcie: 📍 Client and contract: automotive industry client | B2B 🗣️ Recruitment: phone screen with our recruiter + 2 on-line meetings with hiring managers 🗺️ 100% remote work from anywhere within the EU Verita HR is an international company providing recruitment support within #Fintech, #Finance and #Banking market in EMEA. We connect the most innovative organizations with the best people in the market. We conduct systematic market research, which allows our Digital Teams to be a step ahead of the competition. You will join a data governance and data quality initiative focused on improving how data quality is monitored across the organization using Collibra Data Quality. The role combines product ownership with hands-on work around defining and implementing data quality scorecards. You will collaborate with data owners, stewards, and technical teams to build practical controls that support reliable data and can be maintained by internal teams over time. The position also includes helping shape the direction of the data quality capability, including prioritization and effort estimation. The project is delivered within a distributed data governance program, and consultants must be located within the EU due to regulatory requirements. What's in it for you? - Long-term project with a well-known automotive industry client - 100% remote work from anywhere within the EU - Competitive compensation on a B2B contract - Flexible working hours - Private healthcare and a Multisport card Wymagania: - Solid experience working with Collibra Data Quality, especially building or configuring data quality scorecards. - Strong understanding of data profiling, data quality rules, thresholds, and monitoring metrics. - Experience working within data governance environments alongside data owners and stewards. - Ability to train and support business and technical users. - Experience acting in a Product Owner or similar coordination role, including backlog prioritization. - Familiarity with cost estimation or pricing models related to platform capabilities or services. - Comfortable working in large enterprise environments with multiple stakeholders. Nice to have: - Knowledge of Collibra Data Catalog or metadata management tools. - Experience with Power BI or other tools used to visualize data quality metrics. Codzienne zadania: - Design and implement Collibra Data Quality scorecards, including rules, thresholds, and monitoring metrics. - Define data profiling approaches and data quality controls aligned with business needs. - Work with data owners and stewards to translate requirements into practical data quality checks. - Act as Product Owner for the data quality capability, supporting backlog prioritization and stakeholder alignment. - Contribute to roadmap planning and effort estimation related to new data quality initiatives. - Run knowledge-sharing sessions and workshops to help internal teams understand and maintain scorecards. - Prepare documentation and guidelines that enable teams to manage the solution independently. - Provide occasional platform support, such as configuration adjustments or troubleshooting.
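The snippet below is not the Collibra Data Quality API; it is only a generic sketch of the scorecard idea described above, rolling rule results up per data-quality dimension and comparing them against thresholds, with all names invented for the example.

```python
from dataclasses import dataclass

@dataclass
class RuleResult:
    dimension: str      # e.g. completeness, uniqueness, validity
    passed_rows: int
    total_rows: int

def scorecard(results: list[RuleResult], thresholds: dict[str, float]) -> dict[str, dict]:
    """Aggregate rule results per data-quality dimension and compare against thresholds."""
    summary: dict[str, dict] = {}
    for dim in {r.dimension for r in results}:
        dim_results = [r for r in results if r.dimension == dim]
        passed = sum(r.passed_rows for r in dim_results)
        total = sum(r.total_rows for r in dim_results)
        score = passed / total if total else 1.0
        target = thresholds.get(dim, 0.95)
        summary[dim] = {"score": round(score, 3), "target": target, "ok": score >= target}
    return summary

results = [RuleResult("completeness", 980, 1000), RuleResult("uniqueness", 999, 1000)]
print(scorecard(results, {"completeness": 0.99, "uniqueness": 0.995}))
```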
Analityczka / Analityk ds. Modelowania Ryzyka - Data scientist (k/m)
GRUPA PZU
⚲ Warszawa
Wymagania
- SQL
- Python
- SAS
Our requirements: - At least 2 years of experience in data analysis and modelling - Experience in building machine learning models - Experience working with large databases and fluent SQL - Knowledge of Python and a willingness to work and grow in this language - Strong analytical skills - A university degree in econometrics, mathematics, or a related field - Knowledge of English Nice to have: - Experience in insurance, controlling, or pricing actuarial work - Practical experience with SAS Responsibilities: - Creating and maintaining tariffs for corporate insurance - Creating and maintaining the machine learning models that underpin insurance premium calculation - Building predictive models on tabular data and performing text analysis - Creating and developing modelling and analytical tools - Participating in business projects important to the corporate insurance area - Automating data and risk monitoring processes - Work with a direct impact on the business and financial results of PZU SA We offer: - Hybrid work model with flexible start times - Employment contract - Employee Pension Plan of 7% paid by the employer - Medical care at PZU Zdrowie - Employee discount of up to 50% on insurance (including PZU DOM, PZU AUTO) - Benefits platform incl. a sports card, cinema and theatre tickets, and shopping vouchers - Flexible group life insurance in multiple variants - Access to a library of digital trainings and modern learning platforms - Training and development programmes for employees and managers - Wellbeing programmes and activities for employees - 4 hours off on your birthday (to be taken in the month of your birthday) - 2 days per year for employee volunteering (you can run your own volunteering project or join a campaign organised by the PZU Foundation) - Opportunity to pursue your sporting passions within the 18 sections of the PZU Sport Team (from badminton to sailing) - The greenest office in Warsaw (PZU Park), with relaxation zones and a gym
👉Software Engineer (Data Science)
Xebia sp. z o.o.
⚲ Wrocław, Rzeszów, Gdańsk, Warszawa
16 700 - 21 700 PLN netto (B2B)
Wymagania
- Java/Kotlin
- GCP
- SQL
- Data Science
🟣 You will be: • developing backend services and applications using Kotlin or Java, • processing large data volumes using Scala and BigQuery, • designing and maintaining data pipelines using Apache Airflow in the Cloud Composer (GCP) environment, • working with Google Cloud Platform services to build scalable and reliable data infrastructure, • preparing and transforming datasets for machine learning processes, • supporting the development and training of AI/ML models, including experimentation and A/B testing, • performing exploratory data analysis and creating visualizations to support business insights and decision-making, • collaborating with cross-functional teams to deliver data-driven applications and AI-enabled solutions. 🟣 Your profile: • 3+ years of experience in Software Engineering, preferably in backend or data-focused systems, • strong knowledge of Kotlin or Java for backend development, • good knowledge of SQL and experience working with large datasets, • experience building data pipelines or data processing workflows, • experience working with cloud platforms (preferably GCP), • basic experience working with data analysis or machine learning workflows, • strong problem-solving skills and ability to work in cross-functional teams, • very good command of English (spoken and written), • openness to occasional on-site team meetings (up to once per quarter) in our Poznań or Warsaw office. 🟣 Nice to have: • experience with Scala, Spark, or other big data processing tools, • experience with Apache Airflow or similar orchestration tools, • familiarity with ML platforms such as Vertex AI, • experience with React or data visualization tools. Work from the European Union region and a work permit are required. 🟣 Recruitment Process: CV review – HR call – Technical Interview – Client Interview – Client Interview II – Hiring Manager call – Decision 🎁 Benefits 🎁 ✍ Development: • development budgets of up to 6,800 PLN, • we fund certifications, e.g. AWS, Azure, • access to Udemy, O'Reilly (formerly Safari Books Online) and more, • events and technology conferences, • technology Guilds, • internal training, • Xebia Upskill. 🩺 We take care of your health: • private medical healthcare, • MultiSport card - we subsidise a MultiSport card, • mental health support. 🤸♂️ We are flexible: • B2B or employment contract, • contract for an indefinite period.
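To illustrate the BigQuery-based data processing mentioned above, here is a minimal parameterized-query sketch with the google-cloud-bigquery client; the project, dataset, table, and column names are placeholders.

```python
# pip install google-cloud-bigquery  (assumed environment)
from google.cloud import bigquery

client = bigquery.Client(project="<gcp-project-id>")

# A parameterized query keeps the SQL safe and reusable.
query = """
    SELECT user_id, COUNT(*) AS events
    FROM `<gcp-project-id>.analytics.events`
    WHERE event_date = @day
    GROUP BY user_id
    ORDER BY events DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("day", "DATE", "2024-06-01")]
)

for row in client.query(query, job_config=job_config).result():
    print(row.user_id, row.events)
```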
👉Data Scientist
Xebia sp. z o.o.
⚲ Wrocław, Rzeszów, Gdańsk, Warszawa
20 000 - 25 000 PLN netto (B2B)
Wymagania
- optimization theory
- SQL
- Python
🟣 You will be: • designing and developing bidding and optimization strategies for CPC-based auction systems, including advanced objectives such as target ROAS, • optimizing auction mechanisms (ranking, pricing, filtering) to balance platform revenue, advertiser value, and marketplace stability, • building and maintaining offline simulation environments, including counterfactual simulations and auction replay systems, to safely evaluate algorithmic changes, • designing and implementing feedback loops and control mechanisms to ensure stable and robust system behavior under dynamic market conditions, • conducting advanced ecosystem-level analytics on large-scale datasets to identify nonlinear effects, equilibrium points, and systemic risks, • designing, executing, and analyzing complex experiments and causal inference studies, accounting for network effects and long-term system impact, • contributing to the evolution toward smart, autonomous bidding and auction optimization through data-driven initiatives and roadmap development. 🟣 Your profile: • proficiency in Python and SQL, with experience working on large-scale datasets, data analysis, statistical modeling, and algorithm prototyping, • experience working with Big Data environments or distributed data processing systems, • strong quantitative background in Mathematics, Statistics, Data Science, Optimization, Decision Science, or Operations Research, • solid understanding of optimization theory, including linear, nonlinear, constrained, and stochastic optimization, • knowledge of game theory and market mechanisms, particularly in competitive or auction-based environments, • experience designing and implementing algorithms under multiple objectives and constraints, considering ecosystem-wide impact, • strong analytical and system-level thinking, with the ability to reason about feedback loops, system dynamics, and equilibrium behavior and translate business goals into quantitative solutions, • ability to collaborate effectively in cross-functional, international teams and very good command of English (spoken and written). 🟣 Nice to have: • experience with auction systems, bidding strategies, or advertising technology, • familiarity with control theory concepts, such as PID controllers or similar feedback mechanisms, • exposure to causal inference, experimentation platforms, or advanced A/B testing setups, • background in economics, mechanism design, marketplace optimization, or experience working with large-scale decision systems. Work from the European Union region and a work permit are required. 🟣 Recruitment Process: CV review – HR call – Technical Interview – Client Interview – Hiring Manager call – Decision 🎁 Benefits 🎁 ✍ Development: • development budgets of up to 6,800 PLN, • we fund certifications, e.g. AWS, Azure, • access to Udemy, O'Reilly (formerly Safari Books Online) and more, • events and technology conferences, • technology Guilds, • internal training, • Xebia Upskill. 🩺 We take care of your health: • private medical healthcare, • MultiSport card - we subsidise a MultiSport card, • mental health support. 🤸♂️ We are flexible: • B2B or employment contract, • contract for an indefinite period.
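As a toy illustration of the feedback-loop control mentioned above (in the spirit of a P-only controller), the sketch below nudges a CPC bid toward a target ROAS; the gain, caps, and ROAS values are invented, and a real system would add smoothing, exploration, and safety constraints.

```python
def adjust_bid(current_bid: float, observed_roas: float, target_roas: float, gain: float = 0.3,
               min_bid: float = 0.05, max_bid: float = 5.0) -> float:
    """Raise the bid when ROAS is above target (room to buy more traffic),
    lower it when ROAS is below target (traffic too expensive)."""
    # Relative error of the observed ROAS vs. the target.
    error = (observed_roas - target_roas) / target_roas
    new_bid = current_bid * (1.0 + gain * error)
    return max(min_bid, min(max_bid, new_bid))

bid = 1.00
for roas in [4.8, 3.9, 3.2, 4.4]:       # observed ROAS per feedback cycle
    bid = adjust_bid(bid, observed_roas=roas, target_roas=4.0)
    print(f"observed ROAS {roas:.1f} -> next bid {bid:.3f}")
```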
Senior SQL Server DBA
Experis Manpower Group
⚲ Kraków
180 - 200 PLN/h netto (B2B)
Wymagania
- SQL Server HA/DR
- AlwaysOn
- DP-300
- Azure
B2B: 180-200 PLN/h net + VAT Responsibilities: • Provide expert-level administration of Microsoft SQL Server environments hosted on Azure, ensuring high availability, performance, and resilience for critical workloads. • Responsible for installation, tuning, security hardening, backup/restore, and HA/DR strategy, as well as diagnosing and resolving complex issues. • Partner with cloud engineers and architects to design optimal database deployments, automate operational tasks, and maintain strong compliance and audit posture across database platforms. Requirements: • DP-300 (a key certification for this role) • Experience with SQL Server HA/DR, AlwaysOn, migrations, and performance tuning • Azure certifications (AZ-104, AZ-305) • Knowledge of automation (Terraform, PowerShell) • Security awareness (SC-900 / Security+) • Fluency in English Our offer: • Hybrid work, 40% from the office in Kraków or Wrocław, 60% home office • Multisport card • Private healthcare • Life insurance • PowerYou – e-learning platform
Senior Hadoop/Big Data Developer
DCG sp. z o.o.
⚲ Gdynia, Warszawa
21 840 - 28 560 PLN (B2B)
Wymagania
- Hadoop
- Spark
- SQL
- Scala
- Yarn
- Sqoop
- Hive
- Impala
- MapReduce
- Oozie
- CI/CD
- Git
- Ansible
- Bamboo
- Jenkins
- Java (nice to have)
- SAFe (nice to have)
Wymagania: - 5+ years of Scala experience - Experience with functional programming techniques & principles – Scala - Experience in working in Spark - Knowledge of Linux Shell Scripting - Knowledge of SQL - Knowledge of Hadoop stack (YARN, Sqoop, Hive, Impala, MapReduce, Oozie, etc.) - Familiar with version control and CI/CD tools (Git, Ansible, Bamboo, Jenkins) Nice to have: - Java experience - Experience with streaming technologies such as Flink and Kafka - Familiar with data analysis - Familiar with SAFe Agile way of working Codzienne zadania: - Work on assigned project tasks within a SAFe Agile environment, collaborating closely with cross-functional teams - Take ownership of development activities across the Hadoop / Big Data ecosystem to deliver high-quality, scalable solutions - Collaborate with team members and stakeholders to ensure alignment, knowledge sharing, and customer-focused delivery - Contribute to end-to-end development processes, including design, implementation, testing, and deployment of Big Data solutions - Ensure that solutions meet functional, performance, and reliability requirements within the project scope - Actively participate in team ceremonies and support continuous improvement within the SAFe framework
Senior Data Engineer - Web Data & Automation
Link Group
⚲ Warszawa
30 000 - 45 000 PLN (PERMANENT)
Wymagania
- Python
- SQL
- Web technologies
- JavaScript
- Linux
- Selenium
- Postman
- Docker
- Cloud platform
- Azure
O projekcie: Senior Data Engineer - Web Data & Automation We are seeking a Data Engineer to join our global team working on innovative ways to collect and deliver unique web-based data that supports critical business decisions. You’ll work on a variety of datasets, build reliable pipelines, and develop tools that enable teams to access high-quality data efficiently. Join a team where creativity, technical expertise, and innovative thinking drive meaningful impact every day! Wymagania: - 5+ years of software development experience, strong in Python and SQL. - Experience with web technologies (HTML, JavaScript, APIs) and Linux. - Familiarity with web scraping tools (Selenium, Scrapy, Postman, XPath). - Knowledge of containerization (Docker) and cloud platforms (AWS or Azure preferred). - Strong problem-solving skills and ability to work independently. Codzienne zadania: - Build and maintain automated pipelines to gather and process web-based data. - Collaborate with teams to understand data needs and ensure timely delivery. - Implement data validation and monitoring systems for accuracy and reliability. - Develop tools to enhance data collection, processing, and infrastructure. - Participate in on-call rotations to support critical data systems.
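To illustrate the collect-and-validate loop this role describes, below is a minimal sketch using requests and lxml XPath selectors; the URL, selectors, and validation rules are placeholders, and any production collector would also respect robots.txt, rate limits, and site terms.

```python
# pip install requests lxml  (assumed environment)
import requests
from lxml import html

def fetch_prices(url: str) -> list[dict]:
    response = requests.get(url, headers={"User-Agent": "data-pipeline/1.0"}, timeout=30)
    response.raise_for_status()
    tree = html.fromstring(response.text)

    records = []
    # XPath selectors depend entirely on the target page structure (placeholders here).
    for node in tree.xpath("//div[@class='product']"):
        name = node.xpath(".//h2/text()")
        price = node.xpath(".//span[@class='price']/text()")
        records.append({"name": name[0].strip() if name else None,
                        "price": float(price[0].replace(",", ".")) if price else None})
    return records

def validate(records: list[dict]) -> list[dict]:
    # Basic completeness and sanity checks before the data enters the pipeline.
    return [r for r in records if r["name"] and r["price"] is not None and r["price"] > 0]

if __name__ == "__main__":
    rows = fetch_prices("https://example.com/catalog")
    print(f"collected {len(rows)} rows, {len(validate(rows))} passed validation")
```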
Senior Data Engineer
Ework Group
⚲ Remote
27 216 - 30 072 PLN (B2B)
Wymagania
- Databricks
- Azure
- Python
- CI/CD
- Spark
O projekcie: 🔹 For our Client from the pharmaceutical industry we are looking for a Senior Data Engineer 🔹 The Senior Data Engineer will join the DataCore team. DataCore enables the development and consumption of new and existing data products in a scalable and secure manner – supported by an overarching Data Governance Framework. The data engineer will be responsible for managing the orchestration of application-specific data products and aggregated shared insights. Wymagania: - Extensive experience with Azure, Azure Databricks, and a deep understanding of Azure architecture and services - Strong proficiency in Python programming - Experience using Spark, including Spark SQL, and an understanding of how to optimize Spark performance - Automated unit testing and code quality inspection - Experience with CI/CD (GitHub Actions / Azure DevOps) - Familiarity with AWS - Educational background in Computer Science, Software Development or Engineering - Experience in the pharma domain or another regulated area is considered an advantage Codzienne zadania: - Collaborate with cross-functional teams to design, develop and maintain data pipelines and analytics solutions. - Design and build a foundational platform for a modern data lake architecture, optimizing it for scalability, flexibility, and performance. - Develop automated tests to ensure data accuracy and quality. - Continuously drive improvements that make developers on the platform even more efficient, and act as a sparring partner on the use of Azure services for the organisation. - Leverage your expertise in cloud development to design and implement innovative digital solutions focused on delivering business insights and patient care in real time.
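As an illustration of the automated-unit-testing requirement above, here is a minimal sketch of a testable PySpark transformation with a pytest test; the table, column, and metric names are invented for the example.

```python
import pytest
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

def daily_active_patients(events: DataFrame) -> DataFrame:
    """Count distinct patients per day from raw event data."""
    return (events
            .withColumn("event_date", F.to_date("event_ts"))
            .groupBy("event_date")
            .agg(F.countDistinct("patient_id").alias("active_patients")))

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()

def test_daily_active_patients(spark):
    events = spark.createDataFrame(
        [("p1", "2024-06-01 10:00:00"),
         ("p2", "2024-06-01 12:00:00"),
         ("p1", "2024-06-02 09:00:00")],
        ["patient_id", "event_ts"],
    )
    result = {row["event_date"].isoformat(): row["active_patients"]
              for row in daily_active_patients(events).collect()}
    assert result == {"2024-06-01": 2, "2024-06-02": 1}
```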
Senior Data Engineer
Integral Solutions
⚲ Warszawa
20 160 - 20 160 PLN (B2B)
Wymagania
- ETL
- Oracle
- PostgreSQL
- Google BigQuery
- Kafka
- Pub/Sub
- Linux
- Python (nice to have)
- Airflow (nice to have)
- Rust (nice to have)
- Apache Spark (nice to have)
O projekcie: - Position* Senior Data Engineer - Location* Warszawa - Work mode* Hybrid - Cooperation* B2B - Rate* 120 PLN/h net B2B + VAT What we offer - Work on a B2B contract - Access to the myBenefit platform, including private medical insurance, Multisport and many more - Gifts for various occasions - Company events Wymagania: - Knowledge of the principles of designing and building modern data warehouses - Knowledge of the aspects involved in building ETL/ELT feeding pipelines - Expert knowledge of Oracle and PostgreSQL relational databases (database design, build, and tuning) - Knowledge of Google BigQuery - Knowledge of Informatica PowerCenter - Knowledge of technologies used in event-driven architectures, such as Kafka and Pub/Sub - Knowledge of Linux at an advanced user level Nice to have - AutomatenNow - Python - Airflow - Rust - Apache Spark Codzienne zadania: - Building Data Lake and Data Warehouse solutions, covering both online and offline data ingestion processes. - Designing and implementing IT solutions responsible for data ingestion, processing, and integration. - Analysing data and preparing logical and physical models according to business requirements. - Optimizing and tuning SQL queries in relational and warehouse databases running both on-premises and on Google Cloud Platform. - Creating and maintaining data flows (offline and online) using on-premises and GCP tools. - Developing data loading solutions in Python and Rust for structured and unstructured data. - Preparing subject-oriented data marts based on relational and columnar databases. - Creating technical concepts and delivering implementations based on business needs and requirements. - Resolving incidents and participating in diagnosing and solving data processing problems. - Supporting testing and rollout of changes to data ingestion processes. - Providing support for maintaining the deployed software. - Carrying out tasks in line with DevSecOps practices.
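To illustrate the event-driven ingestion mentioned above (Kafka / Pub/Sub), below is a minimal Kafka consumer sketch using the kafka-python client; the topic, brokers, batch size, and warehouse sink are placeholders.

```python
# pip install kafka-python  (assumed environment)
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                    # topic name (placeholder)
    bootstrap_servers=["localhost:9092"],
    group_id="dwh-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:
        # Placeholder for a bulk insert into the warehouse (Oracle / PostgreSQL / BigQuery).
        print(f"flushing {len(batch)} events to the warehouse")
        batch.clear()
```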
Senior Data Engineer
Idego Group Sp. z o.o.
⚲ Gdynia, Warszawa, Poznań, Kraków, Wrocław
26 880 - 31 080 PLN (B2B)
Wymagania
- Snowflake
- dbt
- Azure
- Fivetran
- AWS (nice to have)
O projekcie: We are currently embarking on a new project with one of the largest book publishing companies in the world and we are looking for a Senior Data Engineer! What you will be doing: You will be responsible for decommissioning old software and migrating data into new systems (with support from and direct cooperation with our amazing Data Architect). Recruitment process: - meeting with HR - meeting with the Head of Data and the Data Architect to check tech skills & culture fit - decision :) Our perks - work environment with zero micromanagement - we cherish autonomy - 100% remote work (unless you want to work from our HQ in Gdynia), including recruitment & onboarding - an experienced team with 4 to 15+ years in commercial projects - unique memes channel - private medical insurance and sports card - we want you to join our team. We are neither an agency giving you projects from time to time, nor a huge corporation where you are a "dev XYZ". At Idego - you matter! Wymagania: - At least 6-7 years of experience in the listed technology stack - Great experience with Snowflake - Experience with dbt - Experience with AWS (preferred) or Azure - Experience with Fivetran (or a similar tool) - Strong communication skills - Fluent spoken and written English - We are looking for someone who brings a positive attitude, a strong work ethic, and a commitment to achieving both individual and team goals.
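As a small illustration of the migration-validation work this role involves, the sketch below compares row counts between a legacy table and its migrated counterpart in Snowflake using the official Python connector; the account, schema, and table names are placeholders.

```python
# pip install snowflake-connector-python  (assumed environment)
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account-identifier>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="ANALYTICS", schema="PUBLIC",
)

def row_count(table: str) -> int:
    cur = conn.cursor()
    try:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]
    finally:
        cur.close()

legacy, migrated = row_count("LEGACY_SALES"), row_count("SALES")
status = "OK" if legacy == migrated else "MISMATCH"
print(f"legacy={legacy} migrated={migrated} -> {status}")
conn.close()
```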
Senior Applied Data Scientist, Learner Modeling
Instructure Hungary Ltd
⚲ Budapest
17 422 - 23 955 PLN (PERMANENT)
Wymagania
- Machine learning
- Data science
- Bayesian
- Data modeling
- Python
- Applied research
- Temporal modeling
- Longitudinal data
- Sequence modeling
- Probabilistic modeling
- AWS (nice to have)
O projekcie: At Instructure, we believe in the power of people to grow and succeed throughout their lives. Our goal is to amplify that power by creating intuitive products that simplify learning and personal development, facilitate meaningful relationships, and inspire people to go further in their education and careers. We do this by giving smart, creative, passionate people opportunities to create awesome things. Our team builds AI-native capabilities, reusable AI systems, and shared infrastructure that power multiple products and workflows across the platform. We are looking for a Senior Applied Data Scientist to develop and validate knowledge tracing and longitudinal models, defining what “mastery” means operationally, and ensuring the outputs are trustworthy, calibrated, and product-ready for learner and educator experiences. You will partner with AI platform engineers to productionize training and scoring pipelines and to monitor quality in live environments. You will work closely with product, engineering, and research partners to turn advanced AI ideas into reliable product capabilities used at scale. Growth & Impact - In This Role, You’ll Be Expected To In this role, you will define how mastery and progression are modeled, validated, and responsibly surfaced to learners and educators. You will build a core differentiator: scientifically grounded learner intelligence that is stable, interpretable, and production-ready. Why Join Us Join us and help shape the future of education by turning cutting-edge AI into reliable product capabilities. At Instructure, we’re on a mission to help educators and students learn together, anytime, anywhere, and however works best. You’ll join our research-driven team tackling education’s biggest challenges with cutting-edge technology. Our projects have included making sense of unstructured feedback, applying large language models to save teachers’ time and improve student experiences, classifying partner networks for smarter recommendations, and detecting fraud to protect resources for real learners. We value diversity, creativity, and passion, and invest in our teams through mentorship, hack weeks, internal conferences, and a culture where innovation thrives. Here, you’ll have the chance to build the next generation of LMS features that make a real impact on students and teachers, and do it in a collaborative, supportive environment that encourages experimentation and growth. 
Wymagania: What You’ll Need - 6+ years of experience in applied machine learning, data science, or applied research, with ownership of models shipped into real products - Strong depth in at least one of: knowledge tracing, sequence modeling, probabilistic modeling, temporal modeling, or Bayesian approaches - Experience working with longitudinal data and designing models that remain stable, meaningful, and interpretable over time - Strong Python and ML stack skills, with ability to implement and iterate on modeling pipelines - Strong evaluation instincts: calibration, uncertainty, robustness, interpretability, and validation strategy - Ability to communicate modeling choices, limitations, and uncertainty clearly to technical and non-technical stakeholders It Would Be a Bonus If You Had - Experience with educational measurement, psychometrics, learning science, or adaptive learning systems - Experience building customer-facing learner progress or mastery products - Experience combining structured knowledge representations (skills, standards, concept graphs) with learner models - Experience designing experiments or observational validation strategies for learning impact - Experience partnering with platform teams to run models reliably at scale on AWS Codzienne zadania: - Design and build knowledge tracing and longitudinal learner models that support mastery and progression features surfaced to customers. - Define mastery and progression signals, model targets, and evaluation criteria aligned to learning outcomes and product requirements. - Build robust training and scoring approaches for noisy, incomplete, and evolving learner interaction data. - Lead evaluation for validity, calibration, stability over time, interpretability, and failure modes, not just predictive accuracy. - Partner with engineering to productionize learner models into reliable services, including deployment, monitoring, and iteration loops. - Collaborate with product and learning partners to translate learning theory into scalable product systems and to communicate model behavior and limitations clearly.
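To illustrate the knowledge-tracing modeling this role centres on, here is a minimal classic Bayesian Knowledge Tracing (BKT) update in plain Python; the guess, slip, and learn parameters are illustrative defaults, not values used by Instructure.

```python
def bkt_update(p_mastery: float, correct: bool,
               p_guess: float = 0.2, p_slip: float = 0.1, p_learn: float = 0.15) -> float:
    """Return P(mastered) after observing one response, then applying the learning transition."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Learning transition: an unmastered skill may become mastered after practice.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior probability that the learner already masters the skill
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
    print(f"after {'correct' if answer else 'incorrect'} answer: P(mastery) = {p:.3f}")
```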
Senior Applied AI Engineer, Retrieval and Semantic Systems
Instructure Hungary Ltd
⚲ Budapest
17 422 - 23 955 PLN (PERMANENT)
Wymagania
- Machine learning
- Applied AI
- Python
- API
- LLM
- Data science
- AWS
- Retrieval system
- Vector store selection
O projekcie: At Instructure, we believe in the power of people to grow and succeed throughout their lives. Our goal is to amplify that power by creating intuitive products that simplify learning and personal development, facilitate meaningful relationships, and inspire people to go further in their education and careers. We do this by giving smart, creative, passionate people opportunities to create awesome things. Our team builds AI-native capabilities, reusable AI systems, and shared infrastructure that power multiple products and workflows across the platform. We are looking for a Senior Applied AI Engineer to own retrieval and semantic systems end to end. This role builds and operates production retrieval as a core capability, including the retrieval infrastructure layer (indexing, storage, scaling, cost, and reliability), quality evaluation, and iteration loops that improve relevance over time. You will partner with platform engineers on deployment standards and observability, but you will own retrieval architecture decisions and day-to-day operation. You will work closely with product, engineering, and research partners to turn advanced AI ideas into reliable product capabilities used at scale. Growth & Impact - In This Role, You’ll Be Expected To In this role, you will define retrieval as a core capability across multiple AI products. You’ll establish the standards and evaluation loops that make semantic systems reliable, measurable, and scalable in production. Why Join Us Join us and help shape the future of education by turning cutting-edge AI into reliable product capabilities. At Instructure, we’re on a mission to help educators and students learn together, anytime, anywhere, and however works best. You’ll join our research-driven team tackling education’s biggest challenges with cutting-edge technology. Our projects have included making sense of unstructured feedback, applying large language models to save teachers’ time and improve student experiences, classifying partner networks for smarter recommendations, and detecting fraud to protect resources for real learners. We value diversity, creativity, and passion, and invest in our teams through mentorship, hack weeks, internal conferences, and a culture where innovation thrives. Here, you’ll have the chance to build the next generation of LMS features that make a real impact on students and teachers, and do it in a collaborative, supportive environment that encourages experimentation and growth. 
Wymagania: What You’ll Need - 6+ years of experience building and shipping production ML or applied AI systems - Proven experience owning a retrieval system in production, including vector store selection and operation - Strong Python engineering skills and experience building services/APIs (for example, FastAPI or similar) - Hands-on experience with embeddings, approximate nearest neighbor search concepts, and retrieval or ranking systems - Experience designing indexing and refresh strategies, including data quality controls and safe backfills - Strong judgment on tradeoffs across relevance, latency, cost, and operational complexity - Strong communication skills and ability to collaborate across engineering, product, and research teams It Would Be a Bonus If You Had - Experience with hybrid retrieval (lexical plus vector), learning-to-rank, or domain-specific reranking - Experience with graph-structured context systems or knowledge graph integration - Experience building evaluation and observability for LLM or retrieval systems (quality drift, failure analysis, regression prevention) - Experience with AWS-native architectures for retrieval and indexing services - Experience in education technology, content, curriculum, or skills modeling Codzienne zadania: - Design, build, and ship production retrieval systems that power AI product capabilities. - Own vector store selection and operation, including scalability, latency, reliability, cost, and multi-tenant design. - Build indexing and refresh pipelines (chunking, embedding generation, backfills, deletes, versioned indices). - Implement semantic retrieval patterns, including embeddings, similarity search, metadata filtering, and reranking. - Define and run retrieval evaluation: gold sets, offline metrics, slice analysis, drift detection, and regression gates. - Partner with platform engineers on CI/CD, service templates, monitoring, and incident readiness while owning retrieval correctness and evolution.
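As a bare-bones illustration of the semantic retrieval described above, the sketch below does brute-force cosine similarity over unit-normalized embeddings; real systems would use an ANN index and learned embeddings, and the vectors and document ids here are fake.

```python
import numpy as np

def normalize(matrix: np.ndarray) -> np.ndarray:
    return matrix / np.linalg.norm(matrix, axis=1, keepdims=True)

# Pretend corpus embeddings (rows) and document ids.
doc_ids = ["syllabus", "quiz-help", "grading-policy"]
doc_vectors = normalize(np.array([[0.9, 0.1, 0.0],
                                  [0.2, 0.8, 0.1],
                                  [0.1, 0.2, 0.9]]))

def search(query_vector: np.ndarray, top_k: int = 2) -> list[tuple[str, float]]:
    q = query_vector / np.linalg.norm(query_vector)
    scores = doc_vectors @ q                      # cosine similarity, since everything is unit-normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [(doc_ids[i], float(scores[i])) for i in best]

print(search(np.array([0.15, 0.85, 0.05])))
```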
Senior AI/ML Platform Engineer
Instructure Hungary Ltd
⚲ Budapest
17 422 - 23 955 PLN (PERMANENT)
Wymagania
- Machine learning
- AI
- CI/CD
- orchestration
- AWS
- Python
- RAG
- Data science
- software engineering
- Docker Containers K8s
- Applied AI engineering
- Deployment
- Applied data science
- Software engineering
O projekcie: At Instructure, we believe in the power of people to grow and succeed throughout their lives. Our goal is to amplify that power by creating intuitive products that simplify learning and personal development, facilitate meaningful relationships, and inspire people to go further in their education and careers. We do this by giving smart, creative, passionate people opportunities to create awesome things. Our team builds AI-native capabilities, reusable AI systems, and shared infrastructure that power multiple products and workflows across the platform. We are looking for a Senior AI/ML Platform Engineer to build the foundations that make AI capabilities shippable and reliable at scale. This role owns the platform patterns, deployment workflows, and operational standards that enable teams to move from prototype to production safely and repeatedly. You will work closely with product, engineering, and research partners to turn advanced AI ideas into reliable product capabilities used at scale. Growth & Impact - In This Role, You’ll Be Expected To In this role, you will define how AI services are built and operated across a growing portfolio of products. You will influence standards that directly improve prototype-to-production speed, system reliability, and the ability to ship multiple AI capabilities in parallel. Why Join Us Join us and help shape the future of education by turning cutting-edge AI into reliable product capabilities. At Instructure, we’re on a mission to help educators and students learn together, anytime, anywhere, and however works best. You’ll join our research-driven team tackling education’s biggest challenges with cutting-edge technology. Our projects have included making sense of unstructured feedback, applying large language models to save teachers’ time and improve student experiences, classifying partner networks for smarter recommendations, and detecting fraud to protect resources for real learners. We value diversity, creativity, and passion, and invest in our teams through mentorship, hack weeks, internal conferences, and a culture where innovation thrives. Here, you’ll have the chance to build the next generation of LMS features that make a real impact on students and teachers, and do it in a collaborative, supportive environment that encourages experimentation and growth. 
Wymagania: What You’ll Need - 7+ years of experience in software engineering, platform engineering, ML engineering, or a closely related role with production ownership - Demonstrated experience designing and operating CI/CD pipelines and release processes for production services - Strong AWS experience (compute, storage, networking, deployment, monitoring) - Experience with containers and modern deployment patterns (Docker required; Kubernetes or equivalent is a plus) - Experience with orchestration and workflow tooling (for example, Airflow, Dagster, Argo, Step Functions, or similar) - Strong operational mindset and experience supporting production systems (debugging, reliability, performance, cost) - Ability to set pragmatic standards, work through ambiguity, and collaborate across engineering, product, and research teams It Would Be a Bonus If You Had - Experience building ML platform components (feature stores, model registries, training pipelines, model serving patterns) - Experience with infrastructure-as-code and environment automation - Experience building evaluation and observability systems for AI services (quality metrics, drift signals, regression gates) - Experience supporting retrieval or LLM-enabled systems in production - Experience mentoring engineers or leading technical standards in a platform-building environment Codzienne zadania: - Build and improve CI/CD workflows for ML/AI services across development, staging, and production environments. - Establish reusable service templates and “golden path” patterns for AI microservices (logging, metrics, tracing, config, secrets, versioning). - Design and operate orchestration patterns for batch and online workloads on AWS. - Define operational standards for AI services: monitoring, alerting, SLOs, runbooks, incident readiness, and rollback strategies. - Partner with applied AI and research teams to productionize prototypes into stable services without slowing iteration velocity. - Influence platform architecture and engineering standards across a growing Applied AI team.
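The "golden path" service template mentioned in the daily tasks is not specified in the posting; the sketch below is one plausible minimal shape for it (environment-driven config, structured logging, a health probe, and a metrics endpoint), with the library choices FastAPI and prometheus_client assumed rather than prescribed.

```python
# Hypothetical "golden path" AI microservice template: config from the
# environment, structured logging, a health endpoint, and Prometheus metrics.
import logging
import os

from fastapi import FastAPI, Request, Response
from prometheus_client import CONTENT_TYPE_LATEST, Counter, generate_latest

SERVICE_NAME = os.getenv("SERVICE_NAME", "ai-service")
MODEL_VERSION = os.getenv("MODEL_VERSION", "0.0.1")

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(SERVICE_NAME)

REQUESTS = Counter("http_requests", "HTTP requests handled", ["path", "status"])

app = FastAPI(title=SERVICE_NAME, version=MODEL_VERSION)


@app.middleware("http")
async def count_requests(request: Request, call_next):
    # Every route gets request counting for free, one of the "template" benefits.
    response = await call_next(request)
    REQUESTS.labels(path=request.url.path, status=str(response.status_code)).inc()
    return response


@app.get("/healthz")
def healthz() -> dict:
    # Liveness/readiness probe target for the deployment pipeline.
    return {"status": "ok", "service": SERVICE_NAME, "model_version": MODEL_VERSION}


@app.get("/metrics")
def metrics() -> Response:
    # Feeds the monitoring/alerting and SLO standards named in the tasks above.
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)
```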
Junior Data Analyst
FINANTEQ
⚲ Lublin
7 500 - 11 500 PLN gross (UoP)
Wymagania
- ETL
- Data Warehousing
- SQL
Join a project at a German public-sector bank. The project's main goal is the continuous improvement and expansion of two Data Warehouse platforms that play a key role in processing the client's data for financial accounting, risk management, and regulatory reporting. We offer the option of combining remote work with trips to the offices in Germany every 4-8 weeks, where you will meet the rest of the team. About FINANTEQ: FINANTEQ is a company specializing in software development for mobile banking. The team of around 170 experienced developers, testers, and business analysts works with leading financial institutions in Poland and abroad. Since 2023, FINANTEQ has been part of the Senacor Technologies AG group. Senacor is a leading provider of consulting and technology services, specializing in building advanced digital platforms. The group operates in industries such as banking, insurance, automotive, and the public sector in the DACH region. Senacor supports its clients in software design, planning, and development, as well as in managing large-scale transformation programmes. Your tasks on our team: • data analysis for the Data Warehouse platform; • conducting comprehensive business analysis in projects delivered for the client; • understanding and interpreting business requirements, most often related to reporting on banking products; • analysis of available data sources and their quality; • specification and implementation of data transformation logic in a low-code platform. What matters most to us: • at least one year of professional experience as a Data Analyst; • a university degree in IT, mathematics, physics, or a related field; • willingness to take part in business trips every 4-8 weeks; • ability to analyze business processes and identify client needs; • experience with ETL tools; • ability to work with data query tools, SQL in particular; • knowledge of data modeling and data flow modeling in a Domain-specific Language; • communicative English and Polish (spoken and written). What we have for you: • 100% remote, office-based, or hybrid work, depending on your needs; • an employment contract; • an atmosphere of openness and collaboration; • an international team; • co-financing of the MultiSport programme and private medical care for employees; • training; • as well as company events. Please submit your CV in English. Interviews are conducted in English.
GCP Data ETL Engineer
GFT Poland
⚲ Kraków
17 420 - 26 710 PLN (PERMANENT)
Wymagania
- ETL
- GCP
- BigQuery
- Python
O projekcie: Why join GFT? You will work with and learn from top IT experts. You will join a crew of experienced engineers: 60% of our employees are senior level. You will work in a stable company (32 years on the market) in demanding and challenging projects for the biggest financial institutions in the world. We are accelerating the development of data tooling to digitize the collection, enrichment, and visualization of Key Risk Indicators (KRIs), with a strong focus on trading controls and risk reporting. Over the past years, multiple trusted data sources have been integrated into a BigQuery-based data warehouse on Google Cloud. The environment delivers reliable, actionable insights and supports automated risk governance reporting through standardized dashboards and visualizations. To support the next phase of growth, we are expanding the delivery team and looking for a GCP Data ELT Engineer to join the programme. We offer you: - Working in a highly experienced and dedicated team - Competitive salary and extra benefit package that can be tailored to your personal needs (private medical coverage, sport & recreation package, lunch subsidy, life insurance, etc.) - Contract of employment - On-line training and certifications fit for career path - Regular social events - Access to e-learning platform Wymagania: - Commercial experience as a Data Engineer / ELT Engineer - Strong hands-on experience with GCP (especially BigQuery) - Experience building and optimizing ELT/ETL pipelines - Good knowledge of SQL and data modeling - Experience working in Agile/Scrum environments - Familiarity with JIRA and collaborative delivery workflows - Strong communication skills and team-oriented mindset - Very good English (spoken and written) Nice to have - Experience in financial services or risk/reporting environments - Knowledge of data visualization/reporting solutions - Understanding of governance and data quality frameworks Codzienne zadania: - Designing and developing ELT pipelines on Google Cloud Platform - Integrating and transforming data from multiple trusted sources into BigQuery - Delivering work packages and JIRA tasks within an agile framework - Collaborating closely with internal data and engineering teams - Participating in agile ceremonies (planning, stand-ups, retrospectives, code reviews) - Ensuring data quality, performance, and scalability of solutions - Supporting the development of data models that enable analytics and reporting - Contributing to dashboards and visualization data layers
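The posting's actual KRI schema and project layout are not public, so the following is only an illustrative ELT step of the kind described (land raw data, then transform inside BigQuery); the project, bucket, dataset, and column names are made up.

```python
# Illustrative ELT step on BigQuery: load a raw extract, then transform in SQL.
from google.cloud import bigquery

client = bigquery.Client(project="example-risk-project")

# "EL": load a newline-delimited JSON extract from Cloud Storage into a raw table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/kri/2024-06-01/*.json",
    "example-risk-project.raw.kri_events",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        autodetect=True,
    ),
)
load_job.result()  # wait for completion

# "T": transform inside BigQuery into a curated table that dashboards can read.
transform_sql = """
CREATE OR REPLACE TABLE curated.kri_daily AS
SELECT desk, kri_name, DATE(event_ts) AS kri_date, AVG(value) AS avg_value
FROM raw.kri_events
GROUP BY desk, kri_name, kri_date
"""
client.query(transform_sql).result()
```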
Data Scientist
Lingaro
⚲ Warsaw
15 750 - 19 500 PLN (B2B)
Wymagania
- Data Science
- Machine learning
- Forecasting (nice to have)
- Python (nice to have)
- SQL (nice to have)
- Cloud computing (nice to have)
- Databricks (nice to have)
- GCP (nice to have)
- Azure (nice to have)
- OOP (nice to have)
- MLOps (nice to have)
- R (nice to have)
- Scala (nice to have)
- Deep learning (nice to have)
O projekcie: We offer: - Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites. - Workation. Enjoy working from inspiring locations in line with our workation policy. - Great Place to Work® certified employer. - Flexibility regarding working hours and your preferred form of contract. - Comprehensive online onboarding program with a “Buddy” from day 1. - Cooperation with top-tier engineers and experts. - Unlimited access to the Udemy learning platform from day 1. - Certificate training programs. Lingarians earn 500+ technology certificates yearly. - Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly. - Grow as we grow as a company. 76% of our managers are internal promotions. - A diverse, inclusive, and values-driven community. - Autonomy to choose the way you work. We trust your ideas. - Create our community together. Refer your friends to receive bonuses. - Activities to support your well-being and health. - Plenty of opportunities to donate to charities and support the environment. - Modern office equipment. Purchased for you or available to borrow, depending on your location. Wymagania: - commercial experience with various classical data science and Machine Learning (ML) models (e.g. decision trees, ensemble-based tree models, linear regression, etc.) - knowledge of customer analytics concepts or advanced forecasting - model hyperparameter tuning - experience in an analytical role supporting business will be a plus - fluency in Python, basic working knowledge of SQL - knowledge of specific DS/ML libraries - solid experience in one of the cloud computing platforms (Databricks or GCP or Azure) Nice-to-have: - understanding of causal machine learning - experience in working with big data and distributed environments would be a plus - commercial experience proven by multiple successful projects in the areas of forecasting would be a big plus - experience with OOP in Python - experience with MLOps - familiarity with other languages (R, Scala) would be a plus General: - basic computer programming skills and familiarity with programming concepts - strong business acumen - experience with deep learning, reinforcement learning or other advanced modeling concepts in classical Data Science problems Codzienne zadania: - Work on end-to-end classification and forecasting use cases: problem framing, data preparation, model development, evaluation and basic deployment support (e.g. demand forecasting, churn prediction) - Explore and clean data; perform EDA to understand data and flag data quality issues. - Engineer features for tabular and time-series data. - Train, validate, and tune standard ML models (e.g. logistic regression, tree-based models, gradient boosting, simple neural nets, classical time-series models). - Evaluate models with appropriate metrics that have an impact on business KPIs. - Build clear visualizations and concise reports to present model results and insights to business stakeholders. - Collaborate with data engineers and AI engineers to bring models into production (batch scoring, APIs, model monitoring, dashboards). - Document data sources, modeling assumptions, and experiment results in a reproducible way (notebooks, reports, wikis).
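As a small sketch of the kind of tabular use case the tasks describe (e.g. churn prediction), the snippet below trains and evaluates a gradient-boosted classifier; the dataset is synthetic and the feature names are invented, since no real business data is referenced in the posting.

```python
# End-to-end toy example: synthetic churn data, a gradient boosting model,
# and a hold-out metric that could be tied back to a business KPI.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, 5000),
    "monthly_spend": rng.gamma(2.0, 30.0, 5000),
    "support_tickets": rng.poisson(1.5, 5000),
})
# Synthetic target: churn gets more likely with short tenure and many tickets.
logit = -0.05 * df["tenure_months"] + 0.4 * df["support_tickets"] - 0.5
df["churned"] = (rng.random(5000) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"], test_size=0.2, random_state=42
)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {auc:.3f}")  # the evaluation step the tasks call for
```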
Data Engineer/ML [M/F]
Stackmine
⚲ Remote
23 520 - 26 880 PLN (B2B)
Wymagania
- Python
- Scala
- SQL
- Azure Data Factory
O projekcie: The project is delivered within a team building modern digital solutions for the travel & mobility industry. The goal is to develop products that use data and machine learning to improve the traveler experience: from trip planning, through service personalization, to the optimization of operational processes. Wymagania: - min. 5 years of experience as a Data Engineer or ML Engineer, - experience working in cross-functional teams in an agile environment, - experience in designing and developing software (Python, Scala) and in machine learning / NLP, - ability to test and debug commercial software, - very good knowledge of Spark, SQL, and cloud platforms, - knowledge of Databricks, Azure Data Factory, and Azure ML Studio is a plus, - very good command of English and Polish, - availability to start the project within 2-4 weeks. Codzienne zadania: - designing, developing, and maintaining data and ML pipelines for digital products, - collaborating with Data Scientists on designing and building Machine Learning systems, - creating and running MLOps processes covering training, validation, and deployment of ML models, - monitoring data availability, integrity, and quality, - collaborating with Data Architects and Data Scientists on developing, optimizing, and maintaining data pipelines, - analyzing data and communicating key insights to business stakeholders, - planning and delivering data-related tasks within user stories, - working closely with business and management teams to identify business needs, - supporting the team in delivering high-quality solutions at a fast pace, - working in an international, diverse team.
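The posting names Spark and Azure but no concrete pipeline, so the following is only a hypothetical Spark transformation of the sort such a pipeline might run; the storage paths, column names, and aggregation are all assumptions.

```python
# Illustrative Spark feature-building step for a travel/mobility data pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trip-features").getOrCreate()

bookings = spark.read.parquet("abfss://lake@examplestorage.dfs.core.windows.net/raw/bookings/")

# Per-traveler aggregates that a downstream ML model could consume.
features = (
    bookings
    .withColumn("booking_date", F.to_date("booking_ts"))
    .groupBy("traveler_id")
    .agg(
        F.count("*").alias("bookings_total"),
        F.avg("ticket_price").alias("avg_ticket_price"),
        F.max("booking_date").alias("last_booking_date"),
    )
)

features.write.mode("overwrite").parquet(
    "abfss://lake@examplestorage.dfs.core.windows.net/curated/traveler_features/"
)
```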
Data ETL Engineer
Link Group
⚲ Warszawa
21 000 - 26 880 PLN (B2B)
Wymagania
- SQL
- Python
- Terraform
- Cloud
- ETL
- Vault
- GCP
- PUB
- Cloud Composer
- CD pipelines
- BigQuery (nice to have)
- SFTP (nice to have)
- CSV (nice to have)
- JSON (nice to have)
- XML (nice to have)
- Java (nice to have)
O projekcie: We are looking for a Data ETL Engineer to design, build, and maintain scalable BigQuery data models and ETL/ELT pipelines. You will turn raw data into structured, business-ready datasets, optimize performance, ensure data quality, and support secure, cost-efficient cloud solutions. Wymagania: - 3+ years of experience with SQL and complex data transformations (preferably in BigQuery) - Experience building ETL/ELT pipelines and working with Data Vault models - Hands-on experience with GCP services: Cloud Composer/Airflow, Cloud Run, Pub/Sub - Proficiency in Python and Terraform - Familiarity with CI/CD pipelines and version control (Git) - Experience in Agile and DataOps environments - Strong analytical and problem-solving skills Nice to Have - Experience with data ingestion pipelines using Data Fusion/CDAP or similar tools - Experience ingesting data from APIs, SFTP, CSV, JSON, or XML - Knowledge of modern data contract practices - Java development experience (e.g., custom plugins for Data Fusion) - Experience with CI/CD for cloud-based data solutions Codzienne zadania: - Design, build, test, and deploy BigQuery data models and transformations - Develop and maintain ETL/ELT pipelines based on Data Vault principles - Integrate data from multiple sources while ensuring accuracy and consistency - Optimize SQL queries and pipelines for performance, scalability, and cost - Monitor pipelines and troubleshoot failures or performance issues - Implement CI/CD processes and manage code with Git and Jenkins - Collaborate in Agile teams and support knowledge transfer to operations
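The posting pairs Cloud Composer/Airflow with Data Vault loads into BigQuery; the DAG below is a minimal sketch of how those pieces could be wired together. The DAG id, schedule, stored-procedure names, and tables are hypothetical, and the `schedule` argument assumes a recent Airflow 2.x as used by Composer 2.

```python
# Hypothetical Composer (Airflow) DAG: hub load -> satellite load -> quality check.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="datavault_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="0 3 * * *",
    catchup=False,
) as dag:
    load_hub = BigQueryInsertJobOperator(
        task_id="load_hub_customer",
        configuration={"query": {
            "query": "CALL dv.load_hub_customer()",  # assumed stored procedure
            "useLegacySql": False,
        }},
    )
    load_satellite = BigQueryInsertJobOperator(
        task_id="load_sat_customer_details",
        configuration={"query": {
            "query": "CALL dv.load_sat_customer_details()",
            "useLegacySql": False,
        }},
    )
    row_count_check = BigQueryInsertJobOperator(
        task_id="row_count_check",
        configuration={"query": {
            "query": "SELECT COUNT(*) FROM dv.hub_customer",
            "useLegacySql": False,
        }},
    )

    load_hub >> load_satellite >> row_count_check
```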
Data Architect
Idego Group Sp. z o.o.
⚲ Gdynia, Warszawa, Poznań, Kraków, Wrocław
33 600 - 40 320 PLN (B2B)
Wymagania
- dbt
- Snowflake
- Fivetran
- Python
- SQL
- Azure
- Communication skills
- AWS (nice to have)
O projekcie: We are looking for an experienced Data Architect for the world's leading trade publisher, championing brilliant voices from across the globe, bringing their stories, ideas and writing to audiences everywhere. After recently finalizing their technology, data, and process merger, the client is now moving forward with their technology strategy to further enable the organization to bring amazing voices to all. This involves harnessing the power of Cloud solutions, a constantly evolving security stance, the rejuvenation of legacy applications, and further improving the way they engage with and manage their strategic partners and suppliers. You will be joining their Data Centre of Excellence, which provides expertise and effective solutions to the client. Recruitment process: - meeting with HR - meeting with Head of Data and Data Architect to check the tech skills & culture fit - decision :) Our perks - work environment with zero micromanagement - we cherish autonomy - 100% remote work (unless you want to work from our HQ in Gdynia), including recruitment & onboarding - experienced team with 4 to 15+ years in commercial projects - unique memes channel - private medical insurance and sports card - we want you to join our team. We are neither an agency giving you projects from time to time, nor a huge corporation where you are a "dev XYZ". At Idego - you matter! Wymagania: What skills are we looking for? - Tech stack: dbt, Snowflake, Fivetran, Python, SQL - Perfect English (UK client) - Experience with AWS (preferred) or Azure - Strong soft skills - we believe soft skills are just as crucial as technical proficiency, which is why we're looking for a developer who not only excels in coding but is also a team player with excellent communication skills - Really cool team player - Autonomous Years of Experience: - min. 3+ years in Data Architect / Data Modelling roles highly desired - A minimum of 3–4 years of experience in a Senior Data Engineer role Codzienne zadania: - Make design changes, fixes and improvements as per requirements and outcomes of testing and UAT - Creation of documentation including: architectural diagram, data topology, data mappings and transformations - Support for Idego engineers engaged in the project
Data Analyst
FINANTEQ
⚲ Lublin
12 000 - 14 000 PLN gross (UoP)
Wymagania
- ETL
- Data Warehousing
- SQL
Join a project at a German public-sector bank. The project's main goal is the continuous improvement and expansion of two Data Warehouse platforms that play a key role in processing the client's data for financial accounting, risk management, and regulatory reporting. We offer the option of combining remote work with trips to the offices in Germany every 4-8 weeks, where you will meet the rest of the team. About FINANTEQ: FINANTEQ is a company specializing in software development for mobile banking. The team of around 170 experienced developers, testers, and business analysts works with leading financial institutions in Poland and abroad. Since 2023, FINANTEQ has been part of the Senacor Technologies AG group. Senacor is a leading provider of consulting and technology services, specializing in building advanced digital platforms. The group operates in industries such as banking, insurance, automotive, and the public sector in the DACH region. Senacor supports its clients in software design, planning, and development, as well as in managing large-scale transformation programmes. Your tasks on our team: • data analysis for the Data Warehouse platform; • conducting comprehensive business analysis in projects delivered for the client; • understanding and interpreting business requirements, most often related to reporting on banking products; • analysis of available data sources and their quality; • specification and implementation of data transformation logic in a low-code platform. What matters most to us: • at least 3 years of professional experience as a Data Analyst; • a university degree in IT, mathematics, physics, or a related field; • willingness to take part in business trips every 4-8 weeks; • ability to analyze business processes and identify client needs; • experience with ETL tools; • ability to work with data query tools, SQL in particular; • knowledge of data modeling and data flow modeling in a Domain-specific Language; • communicative English and Polish (spoken and written). What we have for you: • 100% remote, office-based, or hybrid work, depending on your needs; • an employment contract; • an atmosphere of openness and collaboration; • an international team; • co-financing of the MultiSport programme and private medical care for employees; • training; • as well as company events. Please submit your CV in English. Interviews are conducted in English.
Architect/ Senior Data Engineer
Scalo
⚲ Warszawa, Łódź
26 544 - 28 224 PLN (B2B)
Wymagania
- Data Lake
- ETL
- Kafka
- PUB
- Big Data
- Google BigQuery
- Kubernetes
- OpenShift
- Relacyjne Bazy Danych
- PostgreSQL
- Python
- Rust
- Apache Airflow
- Apache Spark (nice to have)
- GCP (nice to have)
- Apache Beam (nice to have)
- Informatica PowerCenter (nice to have)
O projekcie: Hi! With us you'll find what you're looking for - see for yourself! What we have for you: - Stable cooperation: technology challenges and work with modern solutions. Internal Mobility - you don't need to change companies to change projects - SCALO Ambassadors: develop technical and pre-sales competencies, influence the direction of the organization's growth, and support others - Personal branding: create valuable content, share your knowledge and experience, speak as an expert at events (e.g. ProMeet) - grow your brand in a natural way - Benefit box: full medical care, a MultiSport card, and a wide Motivizer offer - choose the benefits that look after your health, energy, and everyday comfort - Referral programme: refer a friend from IT and we'll take care of the rest - they get a new project, you get a bonus. A win-win in its purest form - Company get-togethers and events: shared outings, company events, and chances to get to know each other outside of projects - because good relationships make the best teams Wymagania: This offer is for you if: - you have experience in building and designing Data Lake and DWH solutions, - you know the aspects of building ETL/ELT ingestion pipelines, - you know the technologies used in event-driven architecture, e.g. Kafka or Pub/Sub, - you know the aspects of building Big Data solutions that use GenAI capabilities, - you know Google BigQuery, - you know Kubernetes, ideally Google Kubernetes Engine and OpenShift, - you know Oracle and PostgreSQL relational databases in terms of database design, build, and tuning, - you are fluent in Python and/or Rust, - you know Apache Airflow well. Nice to have: - knowledge of the data mesh concept, - knowledge of vector and graph databases, - knowledge of Apache Spark, ideally GCP Dataproc, - knowledge of Apache Beam, ideally GCP Dataflow, - knowledge of AutomateNow, - knowledge of Informatica PowerCenter. Codzienne zadania: - designing and building Data Lake and DWH solutions, - designing ETL/ELT ingestion pipelines, - designing solutions based on event-driven architecture, - designing Big Data solutions that use GenAI capabilities, - working with solutions based on Google BigQuery, - designing solutions using Kubernetes, - working with Oracle and PostgreSQL relational databases, - using Python and/or Rust to build solutions, - working with Apache Airflow, - working in a hybrid model, once a week from the Warsaw office.
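The event-driven architecture the posting mentions (Kafka or Pub/Sub) can be illustrated with a minimal produce/consume round trip; the broker address, topic name, and payload shape below are assumptions, not details from the project.

```python
# Minimal Kafka event-driven sketch: publish one event, then consume it.
import json

from confluent_kafka import Consumer, Producer

TOPIC = "orders.events"

producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce(TOPIC, key="order-123", value=json.dumps({"order_id": "order-123", "status": "CREATED"}))
producer.flush()

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "dwh-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

msg = consumer.poll(timeout=5.0)          # a real ingest service would loop forever
if msg is not None and msg.error() is None:
    event = json.loads(msg.value())
    print("would stage into the lake/DWH:", event)   # e.g. load to BigQuery downstream
consumer.close()
```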
AWS Data Engineer
Red Global
⚲ Warsaw
Wymagania
- AWS
- ETL
- Python
We’re currently looking for an AWS Data Engineer for a long-term, cloud-based project in the energy sector. Key highlights: • Poland-based, 100% remote • 12 months+, long-term cooperation • Strong focus on AWS, Python, and modern data engineering tools • Hands-on experience with data pipeline development, ETL/ELT processes, and data integrations • Experience with AWS services such as S3, Glue, Lambda, Redshift, and EMR • Software Engineering background preferred We’re targeting AWS Data Engineers, not pure Data Analysts — people who build and maintain scalable data pipelines and data platforms in real production environments. The goal of the project is to design, develop, and optimize robust data pipelines and data architectures that improve data accessibility, support analytics and reporting, and enable scalable integration with the organization’s existing cloud infrastructure. If this sounds relevant, feel free to reply or reach out for details. Best regards, Himanshi
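As a sketch of how the AWS services named above might be glued together, the snippet below shows an S3-triggered Lambda that starts a Glue ETL job for a newly arrived file; the Glue job name and argument keys are hypothetical, since the actual project pipeline is not described.

```python
# Illustrative S3 -> Lambda -> Glue trigger for an ETL pipeline.
import boto3

glue = boto3.client("glue")


def handler(event, context):
    # Standard S3 event notification payload delivered to Lambda.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        run = glue.start_job_run(
            JobName="energy-readings-etl",                 # assumed Glue job name
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        print(f"started Glue run {run['JobRunId']} for s3://{bucket}/{key}")
    return {"status": "ok"}
```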
Site Reliability Engineer
DCG sp. z o.o.
⚲ Remote
23 520 - 26 880 PLN (B2B)
Wymagania
- CDN
- Cache
- R
- Infrastructure as Code
- DevOps
- SRE
- Unix
- Linux
- DNS
- TCP
- HTTP
- BGP
- Proxy
- Datadog
- Grafana
- Terraform
- Ansible
- AWS Lambda
- GitLab CI
- Python
- Bash
- API (nice to have)
O projekcie: Site Reliability Engineer. We offer: - Private medical care - Co-financed sports card - Ongoing support from a dedicated consultant - Employee referral programme Wymagania: - A minimum of 4-5 years of experience in SysOps / DevOps / SRE roles - Very good knowledge of Unix / Linux systems - Solid networking fundamentals: DNS, TCP, HTTP, routing (BGP), caching, proxies - Experience with monitoring and observability tools (e.g. Datadog, Grafana) - Experience with DevOps tools: Terraform, Ansible, AWS Lambda, GitLab CI/CD - Experience in automating operations and working with scripts (Python, Bash, APIs) - Ability to analyze system performance and optimize infrastructure costs - Very good command of English Nice to have: - Experience working with CDN technologies - Experience with OTT streaming / QoS for video platforms - Experience working with global content delivery infrastructure - Knowledge of French - Experience working in international environments Codzienne zadania: - Ensuring the availability, reliability, and performance of CDN platforms running in the cloud and on bare-metal infrastructure - Monitoring key performance indicators (latency, throughput, cache efficiency, error rate) and identifying optimization opportunities - Analyzing infrastructure usage trends and planning CDN platform capacity - Taking part in deployments, production rollouts, and analysis of OTT traffic across regions - Taking part in incident handling, root cause analysis, and initiatives improving platform reliability - Developing and maintaining monitoring and observability systems for the CDN infrastructure - Designing dashboards, alerts, metrics, and log pipelines to standardize monitoring - Developing tools that automate operations and infrastructure data analysis - Building internal tools for log analysis, traffic visualization, CDN configuration validation, and diagnostics - Developing infrastructure automation using Infrastructure as Code and CI/CD pipelines
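The internal log-analysis tooling described in the daily tasks could look, in its simplest form, like the sketch below: computing error rate and cache hit ratio from CDN access-log lines. The log format (status code and cache status as whitespace-separated fields) is an assumption for illustration only.

```python
# Toy CDN log summary: error rate and cache hit ratio from access-log lines.
SAMPLE_LOGS = [
    "GET /video/seg1.ts 200 HIT 12ms",
    "GET /video/seg2.ts 200 MISS 87ms",
    "GET /video/seg3.ts 504 MISS 3002ms",
]


def summarize(lines):
    total = hits = errors = 0
    for line in lines:
        parts = line.split()
        if len(parts) < 4:
            continue                      # skip malformed lines
        status, cache_status = parts[2], parts[3]
        total += 1
        hits += cache_status == "HIT"
        errors += status.startswith("5")
    return {
        "requests": total,
        "cache_hit_ratio": hits / total if total else 0.0,
        "error_rate": errors / total if total else 0.0,
    }


if __name__ == "__main__":
    print(summarize(SAMPLE_LOGS))   # in real tooling this would feed a dashboard or alert
```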
Senior Solution Architect - Cloud Data Platforms
Antal Sp. z o.o.
⚲ Kraków
180 - 220 PLN/h net (B2B)
Wymagania
- ETL
- Data Warehousing
- AI
- Spark
- Data modeling
- Python
- Scala
Senior Solution Architect – Cloud Data Platforms 📍 Location: Cracow | hybrid (6-8 visits per quarter) 📝 Contract type: B2B We are looking for an experienced Senior Solution Architect to join an international financial technology team and take ownership of designing scalable and reliable data solutions used across global markets. 🔍 Key Responsibilities: • Lead the design and architecture of end-to-end data and platform solutions aligned with enterprise standards. • Translate business requirements into actionable data and solution designs. • Ensure all technical designs are scalable, cost-effective, and fit for purpose. • Collaborate with DevOps, product owners, and technical teams across regions. • Oversee governance, documentation, architecture reviews and technical assurance. • Design and optimize architectures involving structured and unstructured data, APIs, streaming, and ML/GenAI integration. 🧩 What we expect from you: • 8+ years of experience in Data Engineering / Solution Architecture. • Strong experience with data modeling (logical, physical), data warehousing (Kimball/Inmon), ETL, and non-functional design. • Hands-on experience with Big Data technologies: Spark, Scala/Python, HDFS, Hive. • Public cloud expertise (Google Cloud preferred), especially around data processing & architecture. • Familiarity with AI/GenAI tools, MLOps, and OpenAI/Vertex AI services. • Knowledge of microservices architecture, APIs, Pub/Sub, Kubernetes-hosted platforms. • Tools: Erwin/IDA for modeling, Tableau/Qlik for BI, UML/Archimate/EA for design. 💡 Nice to have: • Experience with architecture frameworks (TOGAF, IFW), metadata management, unstructured data systems (NoSQL, graph DB). • Financial services/banking experience and stakeholder management in global setups. • Fluent English (Mandarin is a plus). Be part of a globally impactful program shaping how data powers decision-making at one of the world's largest institutions. 📩 Apply now and design solutions with real-world scale and significance.
Senior Observability Engineer for Data Middleware
Citigroup
⚲ Warszawa
Wymagania
- MeshIQ products
- DevOps
- CI/CD
- Ansible
- Shell
- Linux
- Java
- Python
Discover your future at Citi. Working at Citi is far more than just a job. A career with us means joining a team of more than 230,000 dedicated people from around the globe. At Citi, you’ll have the opportunity to grow your career, give back to your community and make a real impact. Job Overview: Exciting senior role in Observability with growth and flexibility. Primary responsibilities will be to provide engineering services for a new global MeshIQ Observability platform for Data Middleware products including Kafka, Solace, MQ and EMS, as well as web serving platforms and future support for ETL and BPM domains. The candidate will provide senior engineering leadership for our operations team and thought leadership for our wider Observability roadmap and goals. You will be a core team member involved in the evaluation and planning of our Cloud Adoption strategy. Responsibilities: • Serve as a technology subject matter expert for internal and external stakeholders and provide direction for all firm-mandated controls and compliance initiatives, all projects within the group and in creating a technology domain roadmap • Ensure that all integration of functions meets business goals • Define necessary system enhancements to deploy new products and process enhancements • Develop build and deployment packaging and automation • Responsible for architecture and implementation of HA (High Availability) and DR/COB (Continuity of Business) • Manage security screening and vulnerability tracking for Citi's software certification process • Performance monitoring and L3 troubleshooting support • Integration with enterprise services such as identity management, logging, secrets management, ticketing, etc. Qualifications: • 5+ years' experience in Solution Architecture, Analysis, Design, Development, Integration and support of Observability solutions: • Observability platforms: MeshIQ products (Nastel Navigator, AutoPilot, X-Ray) would be ideal, but relevant experience with platforms like Prometheus, Grafana, AppDynamics, ElasticSearch, etc. also counts • Automation scripting: Ansible, Linux shell, Python, etc. • DevOps Tools: GitHub, Bitbucket, Harness, Jenkins, Artifactory, etc. • CI/CD Automation: experience deploying and troubleshooting applications written in common programming languages such as Java, Python, etc. • 3+ years’ past experience developing, integrating or supporting Data Middleware products, e.g. messaging, streaming, ETL, BPM, etc. • Experience in performance tuning, optimization, monitoring, and troubleshooting • Experience in AWS / GCP / OpenShift is an added advantage but not mandatory We offer: • Private Medical Care Program • Life Insurance Program • Pension Plan contribution (PPE Program) • Employee Assistance Program • Paid Parental Leave Program (maternity and paternity leave) • Sport Card • Holidays Allowance • Sport and team recreation activities • Special offers and discounts for employees • Access to an array of learning and development resources • A discretionary annual performance-related bonus • A chance to make a difference with various affinity networks and charity initiatives
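The posting centers on MeshIQ tooling, which has no widely documented public API, so the sketch below instead uses the standard Prometheus HTTP API (one of the alternative platforms the qualifications list) to pull a middleware metric such as Kafka consumer lag; the Prometheus URL and metric name are assumptions that depend on the exporter in use.

```python
# Query a Prometheus server for Kafka consumer-group lag (an observability check
# of the kind this role would automate). URL and metric name are illustrative.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"
QUERY = "sum by (consumergroup) (kafka_consumergroup_lag)"   # exporter-dependent name


def consumer_lag_by_group() -> dict[str, float]:
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return {r["metric"].get("consumergroup", "unknown"): float(r["value"][1]) for r in result}


if __name__ == "__main__":
    for group, lag in consumer_lag_by_group().items():
        print(f"{group}: lag={lag:.0f}")   # alert if lag stays above an agreed SLO threshold
```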