Job offers
Filters
Remote Senior Analytics Engineer
HeroCoders
⚲ Remote
34 000 - 36 000 PLN (B2B)
Requirements
- SQL
- dbt
- ETL / ELT
- Data modeling
- Python
- BI
- SaaS
- API
- Atlassian (nice to have)
- HubSpot (nice to have)
- Mixpanel (nice to have)
- Amplitude (nice to have)
- Cursor (nice to have)
- Claude Code (nice to have)
- Startup experience (nice to have)
About the project:
If you're searching for a remote role at a company that values its people as its greatest asset, respects work-life balance, and fosters a supportive environment for growth, HeroCoders could be the perfect fit for you. At HeroCoders, we build powerful apps that help people work more efficiently. As a Platinum Atlassian Partner, our tools are trusted by over 70,000 companies worldwide, with a few hundred thousand active users and a steady 33% annual growth rate. While we're proud of what we've achieved, there's still plenty of exciting work ahead. We're a small, fully remote team of 20 people spread across the globe, and right now we're looking for a Senior Analytics Engineer to join us.
Note: This recruitment process will be conducted entirely in English. You will be working with a team spread across the world. We require a 4-hour overlap with typical CET/CEST business hours (9am - 4pm).
Who we're looking for:
We're hiring our first dedicated data person - a Senior Analytics Engineer who will build our data foundation from the ground up. Today, our data lives in multiple systems (Atlassian Marketplace, HubSpot, Mixpanel, Metabase) with no central warehouse, no data transformation layer, and no shared metric definitions. You'll change that. This is a greenfield role with high ownership. You'll design and build the data platform, define the key metrics of our success, and make data accessible to every team - Product, Marketing, Customer Success, and Leadership. You'll work closely with our CEO, Head of Product, and Head of Marketing. A fractional Analytics Lead will support you with architecture decisions, prioritization, and stakeholder alignment.
Requirements:
You're a perfect match if you:
- Have 5+ years of hands-on experience building analytics - you've set up a data warehouse, written transformation logic in dbt, and delivered a single source of truth (SSOT) that business teams actually use
- Are proficient in SQL and data modeling for analytics: star schemas, facts/dimensions, metric layers - and you understand why naming conventions and documentation matter
- Have built ELT/ETL pipelines end-to-end: API ingestion, incremental loads, orchestration, monitoring, and recovery - and can explain the trade-offs you made
- Can write Python (or similar) for integrations, API work, and lightweight automation - but know when a script is the right tool vs. when it's technical debt
- Have worked with subscription/SaaS data and understand metrics like MRR, ARR, churn, retention cohorts, NRR, and LTV
- Have delivered analytics that directly supported go-to-market, growth, or product-led teams - you've helped marketers measure CAC, helped PMs size features, or helped CS identify at-risk accounts
- Have strong experience with BI tools, and believe the future of BI is LLM-based, with chatbots building reports for you (examples: Lightdash, Evidence, Omni)
- Are comfortable working directly with business stakeholders - you can translate a vague question ("why is churn going up?") into a structured analysis and a clear answer
- Are pragmatic: you pick the simplest architecture that solves the problem, document your decisions, and avoid over-engineering
Nice to have:
- Experience with Atlassian Marketplace and HubSpot
- Experience with product analytics platforms (Mixpanel, Amplitude) - event modeling, instrumentation planning, and connecting product data to business outcomes
- Experience being the first (or early) data hire at a startup - you know how to prioritize ruthlessly, build trust with stakeholders, and make progress without a large team
- Comfort with AI-assisted development workflows (Cursor, Claude Code) - we want you to use these daily and expect our data stack to benefit from them too
Daily tasks:
- Define the data platform: set up a cloud data warehouse (Snowflake), dbt (Core/Cloud), repositories, data alerting, a BI system, and an AI-based analytics engineering development model (Claude, Cursor, Conductor, etc.)
- Ingest core data sources: the Atlassian Marketplace API, HubSpot, and product databases (Postgres, MongoDB)
- Build the SSOT data layer using dbt: clean, tested, version-controlled transformations
- Deliver trusted retention and churn analytics (active customers, logo churn, revenue churn, Net Revenue Retention)
- Establish a metric glossary with clear, agreed-upon definitions for core business metrics
- Set up pipeline monitoring, data quality checks, and alerting so broken data is caught before it reaches a dashboard
- Build LTV and unit economics models (CAC, ROAS, payback period) by connecting marketing spend data with downstream Marketplace and CRM data
- Model the full customer lifecycle: trial → paid → expansion → churn, with cohort and conversion analytics
- Fix and automate the Atlassian ↔ HubSpot data sync (reverse ETL), replacing fragile custom scripts with a reliable, maintainable solution
- Identify cross-sell opportunities by modeling multi-product adoption across the 70K+ customer base
- Build and maintain key dashboards for leadership, marketing, product, and customer success
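Purely for orientation, here is a minimal pandas sketch of how the subscription metrics this posting names (MRR, logo churn, Net Revenue Retention) are typically computed; the subscriptions table and its column names are hypothetical, not HeroCoders' schema.

```python
# Minimal sketch of MRR, logo churn, and NRR on a hypothetical
# subscriptions table; not HeroCoders' actual schema.
import pandas as pd

subs = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3, 3],
    "month":       ["2024-01", "2024-02", "2024-01", "2024-02", "2024-01", "2024-02"],
    "mrr":         [100.0, 120.0, 50.0, 0.0, 80.0, 80.0],  # 0.0 => churned
})

by_month = subs.groupby("month")["mrr"].sum()          # total MRR per month
print("MRR:", by_month.to_dict())                      # {'2024-01': 230.0, '2024-02': 200.0}

prev = subs[subs["month"] == "2024-01"].set_index("customer_id")["mrr"]
curr = subs[subs["month"] == "2024-02"].set_index("customer_id")["mrr"]

active_prev = prev[prev > 0]
churned = active_prev.index.difference(curr[curr > 0].index)
logo_churn = len(churned) / len(active_prev)           # share of customers lost
# NRR: this month's revenue from last month's customers / last month's revenue
nrr = curr.reindex(active_prev.index, fill_value=0).sum() / active_prev.sum()
print(f"logo churn {logo_churn:.0%}, NRR {nrr:.0%}")   # 33%, 87%
```

In a real dbt-based stack the same logic would live in version-controlled SQL models rather than a script, but the metric definitions are the same.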
Power BI Developer
AVENGA (Employment Agency, KRAZ No. 8448)
⚲ Warszawa
27 720 - 30 240 PLN (B2B)
Requirements
- Python
- Power BI
- Cloud
- Azure DevOps
- GitHub
- PySpark
- SQL
- Database
- ETL
- SAS (nice to have)
About the project:
Work on migrating data and ETL processes from an on-premise environment (Oracle) to Azure (MS Fabric).
Requirements:
- Higher education in IT or a related field
- Min. 5 years of experience in a similar position
- Experience working with data warehouses
- Good knowledge of public cloud computing (at minimum Azure), backed by hands-on experience with Azure DevOps, GitHub, MS Fabric, Azure Data Factory, and PySpark
- Advanced Python programming skills
- Advanced knowledge of SQL and relational databases (preferably Oracle)
- Experience in data modeling (star schemas, facts/dimensions) and working with ETL systems
- Knowledge of Power BI, including building models and programming in DAX
- Practical knowledge of SAS Institute technologies (SAS 4GL) is an asset
- Ability to solve complex problems in heterogeneous environments (on-premise and cloud)
- Experience with Agile and Scrum methodologies
- Clear, straightforward communication
Daily tasks:
- Migrating and modernizing existing ETL processes and Oracle data structures from the on-premise environment to the Azure cloud environment using Microsoft Fabric tools
- Maintaining and monitoring data feed processes
- Creating and maintaining performant data models, including analytical models in MS Power BI
- Optimizing processes for performance and cost
- Ensuring compliance with data privacy and security regulations and integrating the company's security standards with the Azure environment
- Collaborating with other IT units and external vendors on analyzing and resolving issues
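As a rough illustration of the migration pattern this role describes, the sketch below reads an Oracle table over JDBC with PySpark and lands it as a Delta table, the format used by Fabric lakehouses. Connection details, table names, and the output path are placeholders; the actual project setup may differ.

```python
# Sketch: lift an Oracle table into a Delta table with PySpark.
# All connection details and names below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle-to-fabric").getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB")  # placeholder
    .option("dbtable", "DWH.SALES_FACT")                            # placeholder
    .option("user", "etl_user")
    .option("password", "***")
    .option("fetchsize", 10_000)   # larger fetches speed up bulk reads
    .load()
)

# Land the data as Delta, e.g. into a Fabric lakehouse Tables folder.
(
    df.write.format("delta")
    .mode("overwrite")
    .save("Tables/sales_fact")     # placeholder destination path
)
```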
Power BI Developer
apreel Sp. z o.o.
⚲ Warszawa
23 520 - 26 880 PLN (B2B)
Requirements
- Power BI
- ETL
- Oracle
- Azure
- Azure DevOps
- GitHub
- Azure Data Factory
- PySpark
- Python
- SQL
- DAX
- SAS
About the project:
Client: insurance industry
Position: Power BI Developer
Offer:
- Location: Warszawa, Wola; hybrid work - office visits required once a week
- Start: ASAP
- Cooperation: long-term
- Employment: via apreel on B2B terms
- Rate: up to 160 PLN net/h
Requirements:
- Higher education in IT or a related field
- Min. 5 years of experience in a similar position
- Experience working with data warehouses
- Good knowledge of public cloud computing (at minimum Azure), backed by hands-on experience with Azure DevOps, GitHub, MS Fabric, Azure Data Factory, and PySpark
- Advanced Python programming skills
- Advanced knowledge of SQL and relational databases (preferably Oracle)
- Experience in data modeling (star schemas, facts/dimensions) and working with ETL systems
- Knowledge of Power BI, including building models and programming in DAX
- Practical knowledge of SAS Institute technologies (SAS 4GL) is an asset
- Ability to solve complex problems in heterogeneous environments (on-premise and cloud)
- Experience with Agile and Scrum methodologies
- Clear, straightforward communication
Daily tasks:
- Migrating and modernizing existing ETL processes and Oracle data structures from the on-premise environment to the Azure cloud environment using Microsoft Fabric tools
- Maintaining and monitoring data feed processes
- Creating and maintaining performant data models, including analytical models in MS Power BI
- Optimizing processes for performance and cost
- Ensuring compliance with data privacy and security regulations and integrating the company's security standards with the Azure environment
- Collaborating with other IT units and external vendors on analyzing and resolving issues
Mid/Senior Data Engineer
CodiLime
⚲ Warszawa
16 500 - 26 500 PLN (B2B)
Requirements
- Snowflake
- dbt
- SQL
- Apache Airflow
- Azure Data Factory
- Docker
- Kubernetes
- CI/CD
- Data Lake
- OOP
- Git
- Apache Spark (nice to have)
- Azure Databricks (nice to have)
- GitHub Actions (nice to have)
- API Gateway (nice to have)
- FastAPI (nice to have)
- REST API (nice to have)
- Azure AI Search (nice to have)
- AWS OpenSearch (nice to have)
- ETL (nice to have)
- ELT (nice to have)
About the project:
Get to know us better:
CodiLime is a software and network engineering industry expert and the first-choice service partner for top global networking hardware providers, software providers, and telecoms. We create proofs-of-concept, help our clients build new products, nurture existing ones, and provide services in production environments. Our clients include both tech startups and big players in various industries and geographic locations (US, Japan, Israel, and Europe). While no longer a startup, we have 250+ people on board and have been operating since 2011. We've kept our people-oriented culture. Our values are simple:
- Act to deliver.
- Disrupt to grow.
- Team up to win.
The project and the team:
The project is divided into two main parts:
- A cloud-based platform for data visualization
- A large-scale dataset combining information from over 10 different data sources
You will spend approximately 70% of your time on data processing activities, contributing to the continuous improvement of the large dataset. The remaining 30% will focus on maintaining the platform, working with the API, and ensuring proper integration with the latest version of the dataset.
The goal of the project is to build a centralized, large-scale business data platform for one of the biggest global consulting firms. The final dataset must be enterprise-grade, providing consultants with reliable, easily accessible information to help them quickly and effectively analyze company profiles during Mergers & Acquisitions (M&A) projects. You will contribute to building data pipelines that ingest, clean, transform, and integrate large datasets from more than 10 different data sources, resulting in a unified database containing over 300 million company records. The data must be accurate, well-structured, and optimized for low-latency querying. The dataset will power several internal applications, enabling a robust search experience across massive datasets and making your work directly impactful across the organization.
The data will provide firm-level and site-level information, including firmographics, technographics, and hierarchical relationships (e.g., GU, DU, subsidiary, site). This platform will serve as a key data backbone for consultants, delivering critical metrics such as revenue, CAGR, EBITDA, number of employees, acquisitions, divestitures, competitors, industry classification, web traffic, related brands, and more.
What else you should know:
- Team structure: Data Architecture Lead, Data Engineers, Backend Engineers, DataOps Engineers, Frontend Engineer, Product Owner
- Work culture: an agile, collaborative, and experienced work environment. As this project will significantly impact the organization, we expect a mature, proactive, and results-driven approach. You will work with a distributed team across Europe and India.
We work on multiple interesting projects at the same time, so we may invite you to an interview for another project if we see that your competencies and profile are well suited for it.
More reasons to join us:
- Flexible working hours and approach to work: fully remote, in the office, or hybrid
- Professional growth supported by internal training sessions and a training budget
- Solid onboarding with a hands-on approach to give you an easy start
- A great atmosphere among professionals who are passionate about their work
- The ability to change the project you work on
Requirements:
Do we have a match? As a Data Engineer, you must meet the following criteria:
- Strong experience with Snowflake and dbt (must-have); you will spend approximately 70% of your time working with dbt, SQL, Snowflake, and Airflow
- Strong SQL skills, including experience with query optimization
- Experience with orchestration tools like Apache Airflow, Azure Data Factory (ADF), or similar
- Experience with Docker, Kubernetes, and CI/CD practices for data workflows
- Experience working with large-scale datasets
- Very good understanding of data pipeline design concepts and best practices
- Experience with data lake architectures for large-scale data processing and analytics
- Very good coding skills in Python
- Ability to write clean, scalable, and testable code (including unit tests)
- Understanding and applying object-oriented programming (OOP)
- Experience with version control systems: Git
- Good knowledge of English (minimum C1 level)
Beyond the criteria above, we would appreciate the nice-to-haves:
- Experience with data processing frameworks such as Apache Spark (ideally on Azure Databricks)
- Experience with GitHub Actions for CI/CD workflows
- Experience with API Gateway, FastAPI (REST, async)
- Experience with Azure AI Search or AWS OpenSearch
- Familiarity with designing and developing ETL/ELT processes (a plus)
- Optional but valuable: familiarity with LLMs, Azure OpenAI, or agentic AI systems
Technology stack:
- Languages: Python, SQL
- Data stack: Snowflake + dbt
- Workflow orchestration: Apache Airflow (extensive use of complex DAGs)
- Data processing: Apache Spark on Azure Databricks
- Cloud environment: AWS (EKS, S3, Lambda, ECR, EMR, OpenSearch) for the platform; Azure (AKS, Blob Storage, Azure Functions, ACR, Databricks, Azure AI Search) for the dataset
- CI/CD: GitHub Actions
- Future direction - AI & advanced automation: building agentic AI systems, working with frameworks such as LangChain and cloud-native AI libraries, integrating Azure OpenAI services
- API: API Gateway, FastAPI (REST, async)
Daily tasks:
- Data pipeline development: designing, building, and maintaining scalable, end-to-end data pipelines for ingesting, cleaning, transforming, and integrating large structured and semi-structured datasets; optimizing data collection, processing, and storage workflows; conducting periodic data refresh processes (through data pipelines); building a robust ETL infrastructure using SQL technologies; assisting with data migration to the new platform; automating manual workflows and optimizing data delivery
- Data transformation & modeling: developing data transformation logic using SQL and dbt for Snowflake; designing and implementing scalable, high-performance data models; creating matching logic to deduplicate and connect entities across multiple sources; ensuring data quality, consistency, and performance to support downstream applications
- Workflow orchestration: orchestrating data workflows using Apache Airflow running on Kubernetes; monitoring and troubleshooting data pipeline performance and operations
- Data platform & integration: enabling integration of third-party and pre-cleaned data into a unified schema with rich metadata and hierarchical relationships; working with relational (Snowflake, PostgreSQL) and non-relational (Elasticsearch) databases
- Software engineering & DevOps: writing data processing logic in Python; applying software engineering best practices: version control (Git), CI/CD pipelines (GitHub Actions), DevOps workflows; ensuring code quality using tools like SonarQube; documenting data processes and workflows; participating in code reviews
- Future-readiness & integration: preparing the platform for future integrations (e.g., REST APIs, LLM/agentic AI); leveraging Azure-native tools for secure and scalable data operations
- Being proactive and motivated to deliver high-quality work, communicating and collaborating effectively with other developers, and maintaining project documentation in Confluence
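To give a taste of the orchestration work described above, here is a minimal Airflow DAG (Airflow 2.4+) that runs a dbt build against Snowflake on a daily schedule; the dbt project path and target name are hypothetical, not the project's real configuration.

```python
# Minimal Airflow DAG sketch: a daily dbt build against Snowflake.
# The project path and target name are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_build",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",            # Airflow 2.4+ parameter name
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    dbt_deps = BashOperator(
        task_id="dbt_deps",
        bash_command="cd /opt/dbt/project && dbt deps",
    )
    # `dbt build` runs models, tests, snapshots, and seeds in DAG order.
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="cd /opt/dbt/project && dbt build --target prod",
    )
    dbt_deps >> dbt_build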
Microsoft Customer Insights Consultant / Developer
Deutsche Telekom TSI Hungary Kft.
⚲ Budapest, Debrecen, Pécs, Szeged
10 889 - 21 778 PLN (PERMANENT)
Requirements
- CRM
- Microsoft Dynamics 365
- CDP
- Power Platform
- Lead management
- Campaign management
- Data
- Journeys
- Microsoft Power Platform
- Data modeling
- SQL
About the project:
Company description:
Deutsche Telekom IT Solutions, a subsidiary of the Deutsche Telekom Group, is Hungary's most attractive employer in 2025 (according to Randstad's representative survey). With more than 5,300 employees, the company provides a wide range of IT and telecommunications services and serves several hundred enterprise clients in Germany and other European countries. In 2019, DT-ITS won HIPA's "Best Educational Cooperation" award and, in the same year, was recognized as the "Most Ethical Multinational Company". The company is continuously developing its sites in Budapest, Debrecen, Pécs, and Szeged, and is looking for skilled IT professionals to join its team.
Job description:
Deutsche Telekom MMS GmbH has been a client and business partner of our company since 2019 and is a key innovation and digitalization service provider within the DTAG group. For the Hungarian team of this dynamically growing business unit, we are looking for an ambitious, experienced professional who communicates confidently in a foreign language. If you enjoy a challenging, varied role, spend your working days in an agile environment, and like using your language skills in day-to-day communication, you are the person we are looking for! Shape the customer experience of the future with data-driven solutions in a modern CRM and Customer Insights environment! 🚀
What this position offers:
- Participation in complex, data-driven Customer Experience projects
- A combination of consulting and development tasks
- Shaping customer processes at a strategic level with modern Microsoft solutions
- An environment where data, analytics, and customer experience create real business value
📩 If you are at home in the world of CRM, Customer Insights, and Data & Analytics, and enjoy helping clients build a better customer experience, we look forward to your application!
* Please be informed that our remote working possibility is only available within Hungary due to European taxation regulation.
Requirements:
Expected experience and knowledge:
- Several years of project experience in CRM / marketing / customer experience (e.g. Microsoft Dynamics 365 or similar CRM / marketing platforms)
- Confident knowledge of business and customer-facing processes, especially in: lead management, campaign management, customer journeys, and customer segmentation
- Experience with at least one of the following solutions: Microsoft Dynamics 365 Customer Insights (Data and/or Journeys), or another comparable Customer Data Platform (CDP)
- Good knowledge of the Microsoft Power Platform
- Basic data modeling and SQL skills
- A strong consulting mindset, structured thinking, and a customer-centric approach
Daily tasks:
- In this role you will work on customer-experience and data-driven projects, where the goal is a deep understanding of customer processes and their effective support with modern CRM and Customer Data Platform solutions.
Junior Strong Data Engineer (Databricks)
N-iX
⚲ Gdynia, Krakow, Wrocław
9 258 - 12 961 PLN (B2B)
Requirements
- Python
- Databricks
- Azure
- Spark
- Kubernetes (nice to have)
- Docker (nice to have)
About the project:
#4856
Join our team to work on enhancing a robust data pipeline that powers our SaaS product, ensuring seamless contextualization, validation, and ingestion of customer data. Collaborate with product teams to unlock new user experiences by leveraging data insights. Engage with domain experts to analyze real-world engineering data and build data quality solutions that inspire customer confidence. Additionally, identify opportunities to develop self-service tools that streamline data onboarding and make it more accessible for our users.
Our client was established with the mission to fundamentally transform the execution of capital projects and operations. Designed by industry experts for industry experts, the client's platform empowers users to digitally search, visualize, navigate, and collaborate on assets. Drawing on 30 years of software expertise and 180 years of industrial legacy as part of a renowned Scandinavian business group, the client plays an active role in advancing the global energy transition. The company operates from Norway, the UK, and the U.S.
Requirements:
- Programming: minimum of 1-2 years as a data/software engineer, or in a relevant field
- Python working knowledge: coding experience in Python, particularly in delivering/maintaining data pipelines and troubleshooting code-based bugs; experience working with large codebases, an IDE, and Git
- Data skills: a structured approach to data insights and diagnostic skills for data-related issues
- Cloud: familiarity with cloud platforms (preferably Azure)
- Data platforms: knowledge of Databricks, Snowflake, or similar data platforms
- Database skills: knowledge of relational databases and working experience with SQL
- Big data: experience using Apache Spark/PySpark is a plus
- Documentation: experience in creating and maintaining structured documentation
- Testing: proficiency in using testing frameworks (pytest) to ensure code reliability and maintainability
- Version control: experience with Git and GitLab or equivalent
- English proficiency: B2 level or higher
- Interpersonal skills: strong collaboration abilities, willingness to learn new skills and tools, and an adaptive, exploring mindset; we're looking for candidates who aren't afraid to reach out to others
Nice to have:
- Knowledge of Docker and Kubernetes
- Ability to travel abroad once or twice a year for on-site team building (Oslo, Norway)
Daily tasks:
- Design, build, and maintain data pipelines using Python
- Collaborate with an international team to develop scalable data solutions
- Conduct in-depth analysis and debugging of system bugs (Tier 2)
- Develop and maintain smart documentation for process consistency, including the creation and refinement of checklists and workflows
- Set up and configure new tenants, collaborating closely with team members to ensure smooth onboarding
- Write integration tests to ensure the quality and reliability of data services
- Work with GitLab to manage code and collaborate with team members
- Utilize Databricks for data processing and management
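Since the posting calls out pytest for pipeline reliability, here is a minimal sketch of unit-testing a small transformation step; the `normalize_tag` function is invented for illustration and is not client code.

```python
# Sketch: unit-testing a pipeline transformation with pytest.
# `normalize_tag` is a made-up example function, not client code.
import pytest


def normalize_tag(raw: str) -> str:
    """Normalize an equipment tag: trim, uppercase, collapse spaces."""
    if not raw or not raw.strip():
        raise ValueError("empty tag")
    return "-".join(raw.split()).upper()


@pytest.mark.parametrize(
    "raw, expected",
    [
        ("pt 101", "PT-101"),
        ("  fic  2001 ", "FIC-2001"),
        ("P-42", "P-42"),
    ],
)
def test_normalize_tag(raw, expected):
    assert normalize_tag(raw) == expected


def test_normalize_tag_rejects_blank():
    with pytest.raises(ValueError):
        normalize_tag("   ")
```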
Junior Delivery Engineer
MasterBorn
⚲ Wrocław
6 000 - 10 000 PLN (B2B)
Requirements
- SQL
- database
- Microsoft Software Suite
- ETL
- ELT
- Python (nice to have)
- Azure DevOps (nice to have)
- Jira (nice to have)
About the project:
Meet MasterBorn, a world-class software development company driving success for businesses in FinTech, SportsTech, and MedTech, led by a passionate team committed to client product success. We are currently improving our world-class service with a Delivery Engineer role. As a Delivery Engineer, you'll play a pivotal role in ensuring the seamless delivery and optimization of our product for our esteemed retail clientele. Dive into a world of problem-solving and collaboration, where your technical prowess and excellent communication skills will be key in driving the success of our service and application features, as well as their quality and performance.
- Required: 1-2 years of commercial experience with SQL; basic programming skills; Microsoft Software Suite; Polish C1; English B2
- Offer: B2B/UZL, paid days off, 100% remote (or hybrid/onsite - as you prefer), full-time position, long-term contract, Polish working hours (10:00 - 18:00)
- Perks and benefits: budgets for self-development, integrations, healthcare & sport activities, Mentoring Program, Individual Career Path, Paid Time Off, Paid Health Break, and more
- Tools you'll use: MS Teams, MS Office Suite, Azure DevOps, Jira, Miro, Windows/iOS, MS SQL, PostgreSQL (Greenplum), Snowflake, Python, PowerShell
Recruitment process:
- Send us your CV :)
- Phone call with our Recruiter (15 min)
- Soft-skills interview with our Recruiter (1 h)
- Technical interview with a Senior Specialist (2 h)
- Feedback and decision!
Requirements:
About you - tech skills & experience:
- Highly proficient in SQL (queries/data selection, joins, complex queries, data definition)
- Familiarity with database design, administration, and data modeling
- Basic understanding of core programming concepts
- Experience with various data formats (CSV, JSON, XML, tlog, Parquet)
About you - soft skills:
- Strong written and verbal communication skills, with the ability to present complex technical information clearly and concisely to a variety of audiences
- Ability to communicate with both technical and non-technical customers on a variety of issues
- Ability to select and prioritize tasks within a backlog
- Polish level C1, to communicate efficiently within the team
- English level B2 minimum - direct communication/documentation/tasks/other reading/speaking/writing
- Strong problem-solving abilities and detail orientation, to diagnose issues, suggest solutions, and make decisions based on requirements
Nice to have:
- Experience working with dbt Core
- Experience working with Snowflake
- Understanding of Azure DevOps and release pipelines
- Experience processing and analyzing large data sets
- Experience in software implementation or enterprise SaaS solutions, preferably in retail
- Familiarity with the retail industry and its data
- Python experience (preferably I/O, Pandas, OOB)
- General knowledge of version control (preferably Git)
- Experience with project management tools (preferably Jira)
Daily tasks:
- Work with existing retail customers to configure and update the product, executing to the documented requirements of the project
- Execute the implementation project through all stages
- Ingest and map the client's source data to a standard data model (ETL and ELT processes)
- Effectively translate complex customer requirements, recommend system solutions, and help formulate detailed specifications
- Leverage appropriate resources from the company and the customer, coordinating availability to maximize productivity
- Gain a deep understanding of how our products work, how they interact with each other, and how to build reliable and reusable processes for support
- Act as a problem-solving expert, proactively identifying issues, analyzing root causes, and implementing solutions effectively
- Create documentation used for ongoing support
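The core delivery task above - ingesting a client's source data and mapping it onto a standard data model - often boils down to steps like the pandas sketch below; the file names, column mappings, and target schema are invented for illustration.

```python
# Sketch: map a client CSV extract onto a standard model and save
# it as Parquet. File names and column mappings are invented.
import pandas as pd

# Client extract with its own column names and formats.
raw = pd.read_csv("client_sales_export.csv")            # hypothetical file

COLUMN_MAP = {                                          # client -> standard model
    "StoreNo": "store_id",
    "SKU": "product_id",
    "SoldQty": "quantity",
    "NetAmount": "net_sales",
    "TxDate": "transaction_date",
}

std = raw.rename(columns=COLUMN_MAP)[list(COLUMN_MAP.values())]
std["transaction_date"] = pd.to_datetime(std["transaction_date"])
std["quantity"] = std["quantity"].astype("int64")

# Basic validation before load: no missing keys, no negative quantities.
assert std["store_id"].notna().all(), "missing store_id"
assert (std["quantity"] >= 0).all(), "negative quantity"

std.to_parquet("standard_model/sales.parquet", index=False)
```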
Database Engineer
Devire
⚲ Remote
23 520 - 28 560 PLN (B2B)
Requirements
- Data
- PostgreSQL
- MongoDB
- Python
- Bash
- Kubernetes
- Helm
About the project:
Devire Outsourcing IT is a form of cooperation dedicated to IT specialists, based on B2B self-employment, delivering projects for clients running innovative, modern initiatives. For our international client, a consulting and technology company specializing in digital transformation, software engineering, cloud, and data, we are looking for a Database Engineer.
- Location: remote cooperation
- Rate: 140-170 PLN/h net + VAT
- Compensation under a B2B contract (via Devire)
- 100% remote cooperation
- Benefits package (medical care, Multisport card, etc.)
- Long-term cooperation
Requirements:
- Minimum 4 years of experience managing large, highly available database systems (PostgreSQL / MongoDB)
- Experience with automation using Python or Bash
- Proven experience working with Kubernetes and Helm
- Very good spoken and written English (minimum B2+ level)
Daily tasks:
- Designing, deploying, and maintaining database infrastructure using StatefulSets, Operators, and Helm charts that ensure reliability, self-healing, and scalability
- Managing the full deployment lifecycle of database clusters, including version control of Helm charts and configuration templates
- Supporting and administering production database systems through active monitoring of performance, security, and availability
- Performing zero-downtime upgrades and migrations (major and minor releases), creating and maintaining Helm hooks and dedicated scripts that automate stateful operations
- Optimizing the performance of backend data stores and ensuring data consistency and integrity throughout the pod lifecycle
- Building and extending automation tooling (Bash, Python) that simplifies management of the Kubernetes environment (e.g. provisioning users/secrets, monitoring cluster health)
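To give a flavor of the HA monitoring work described above, here is a small sketch that checks PostgreSQL streaming-replication lag from the primary; the connection parameters are placeholders and the alert threshold is an arbitrary example.

```python
# Sketch: check PostgreSQL streaming-replication lag from the primary.
# Connection details and the threshold are placeholder values.
import psycopg2

LAG_THRESHOLD_BYTES = 64 * 1024 * 1024  # example alert threshold: 64 MiB

conn = psycopg2.connect(
    host="pg-primary.example.internal",  # placeholder
    dbname="postgres",
    user="monitor",
    password="***",
)
with conn, conn.cursor() as cur:
    # pg_stat_replication is only populated on the primary (PostgreSQL 10+).
    cur.execute("""
        SELECT application_name, state,
               pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
        FROM pg_stat_replication
    """)
    for name, state, lag in cur.fetchall():
        status = "OK" if (lag or 0) < LAG_THRESHOLD_BYTES else "ALERT"
        print(f"{status}: replica={name} state={state} lag={lag} bytes")
conn.close()
```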
Data Engineers - Remote Poland or Romania
Strategicsiq
⚲ Poland (Remote), Bucharest
250 - 300 EUR/day net (B2B)
Requirements
- ETL
- REST API
- GCP
- Data modeling
- Kafka
- Java
- SOAP
- Big Data
Role overview:
We are seeking a highly skilled Senior Data Engineer / ML Engineer with strong expertise in building scalable ELT/ETL pipelines, cloud-native data architectures, and API-driven integrations. The ideal candidate will have hands-on experience with Google Cloud Platform (GCP) and Data Vault modeling, and the ability to translate complex business logic into robust engineering pipelines and data models.
Key responsibilities:
- Design, develop, and maintain scalable ELT/ETL data pipelines on GCP
- Build and optimize data engineering pipelines for structured, semi-structured, and streaming data
- Implement and manage ML engineering workflows for production deployment
- Develop and maintain REST/SOAP APIs for system integrations
- Architect and implement Data Vault models for enterprise data warehousing
- Design event-driven architectures using Pub/Sub and streaming frameworks
- Integrate third-party systems into enterprise data ecosystems
- Collaborate with business stakeholders to translate business requirements into scalable data models and pipelines
- Ensure adherence to security best practices (IAM, VPC Service Controls, encryption, service accounts)
- Optimize SQL queries and data performance in BigQuery
- Support reporting and analytics solutions (Power BI preferred)
Required skills & expertise:
- Strong experience building scalable ELT/ETL pipelines, ML engineering workflows, and API integrations (REST/SOAP)
- Hands-on expertise within the GCP ecosystem: Dataform, Cloud Run, Pub/Sub, BigQuery (advanced SQL), GCS, and Firestore
- Solid understanding of Data Vault modeling and enterprise data architecture principles
- Experience designing event-driven and streaming architectures (Kafka, Pub/Sub)
- Working knowledge of GKE, Spark, and Java-based data processing solutions
- Familiarity with MCP integration and TMF protocols
- Strong ability to translate business requirements into scalable data models and engineering pipelines
- Experience implementing cloud security best practices, including IAM, VPC Service Controls, service accounts, and encryption
- Exposure to Power BI for reporting and analytics is an added advantage
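For the event-driven piece of this stack, here is a minimal Google Cloud Pub/Sub publisher sketch using the official google-cloud-pubsub client; the project ID, topic name, and event payload are placeholders.

```python
# Sketch: publish an ingestion event to Google Cloud Pub/Sub.
# Project and topic IDs are placeholders.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "ingestion-events")  # placeholders

event = {"source": "crm", "entity": "account", "op": "upsert", "id": "42"}

# publish() returns a future; result() blocks until the server acks.
future = publisher.publish(
    topic_path,
    data=json.dumps(event).encode("utf-8"),
    origin="batch-loader",  # attributes ride alongside the payload
)
print("published message id:", future.result())
```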
Data Engineer (AWS + GCP)
Square One Resources
⚲ Warsaw, Cracow
21 840 - 28 560 PLN (B2B)
Requirements
- AWS
- GCP
- SQL
- Python
- Java
- Scala
- Jenkins
- Kubernetes
- Docker
- Terraform
- BigQuery
About the project:
START DATE: 16/03
An international consulting firm that helps companies of all sizes have a better impact on the world. The company's capabilities focus on supporting the private and public sectors with their people, processes, and digital technology challenges.
- You will be part of a team designing and implementing data pipelines across multi-cloud environments for an international consulting firm.
- The role includes transforming workflows between cloud data services and Oracle databases, optimizing performance, and collaborating with teams using Agile/Scrum methodologies.
Requirements:
- Strong experience with cloud data services on both Google Cloud Platform (GCP) and Amazon Web Services (AWS) - e.g. BigQuery, Pub/Sub, Dataflow, S3, Glue, Redshift
- Expertise in Oracle SQL and PL/SQL, and experience working with complex stored procedures and large datasets
- Proficiency in programming languages such as Python, Java, or Scala
- Experience with CI/CD and DevOps tooling (e.g. Jenkins, GitLab) and container orchestration (Kubernetes)
- Deep understanding of data modelling, data warehousing, and ETL/ELT design patterns
- Familiarity with Agile development practices (Scrum, Kanban, Jira)
- Exposure to financial markets, trading systems, or related high-performance environments is a strong plus
Nice to have:
- Knowledge of regulatory reporting, market data, or trade surveillance systems
- Experience with Apache Airflow, dbt, or similar orchestration tools
- Understanding of data security practices and compliance frameworks
Daily tasks:
- Design and implement scalable and secure data pipelines across multi-cloud (GCP & AWS) environments
- Develop integration and transformation workflows between cloud data services and on-prem Oracle databases
- Work closely with trading, risk, and analytics teams to understand data requirements and deliver real-time and batch data solutions
- Optimise and monitor the performance of data systems to support latency-sensitive trading applications
- Collaborate with cross-functional teams using Agile/Scrum methodologies to deliver business-critical data projects, and provide technical assistance to the team
- Ensure robust data governance, lineage, and compliance (including MiFID II, FCA, and other regulatory standards)
- Automate data workflows using Terraform, CI/CD pipelines, and containerisation tools (Docker/Kubernetes)
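As a small illustration of the multi-cloud plumbing this role involves, the sketch below copies a CSV object from AWS S3 into a BigQuery table using boto3 and the google-cloud-bigquery client; the bucket, object, dataset, and table names are placeholders.

```python
# Sketch: copy a CSV file from AWS S3 into a BigQuery table.
# Bucket, object, and table names are placeholders.
import io

import boto3
from google.cloud import bigquery

# 1) Fetch the object from S3.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="trading-exports", Key="fills/2025-01-15.csv")
buf = io.BytesIO(obj["Body"].read())

# 2) Load it into BigQuery, letting the service infer the schema.
bq = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
job = bq.load_table_from_file(buf, "analytics.fills_raw", job_config=job_config)
job.result()  # wait for completion
print(f"loaded {job.output_rows} rows")
```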
Data Engineer
KRAMP SAS
⚲ Stare Miasto near Konin (PL), Konin, Łódź, Poznań
Requirements
- Git
- Data modelling
- Docker
- Linux
- Java
- SQL
- Python
We love to see people grow. We empower our product teams, farmers, dealers & suppliers to move forward, driving the digital future of the Kramp Group by discovering and building great products and making life easy for our customers. Kramp Group discovers and builds e-commerce, supply chain, finance & data science solutions aimed at giving the ag-tech industry a push forward. We're building a digital platform serving thousands of customers and generating a turnover of over one billion euros per year.
Who is your team?
You will be part of the Data Office, an international cross-functional team combining:
- Data Modernization
- Data Foundation
As the Data Foundation team, we drive data innovation from the core, making data work securely, scalably, and smartly. Our mission? To build and maintain the foundation that powers Kramp's data-driven future. That means enabling modern, secure, and scalable data infrastructure that ingests raw data and transforms it into golden assets across the organization. We're not just supporting the business - we're leading transformation. With cutting-edge tools like dbt Cloud, a modern tech stack, and a Data Mesh approach, we're creating an environment where AI, machine learning, and automation can thrive. This is the team shaping what's next: from foundational pipelines to building future-ready AI tools. If you're excited about real impact, modern tech, and being part of something big - we're just getting started.
The team is building the foundation of Kramp's self-service analytics platform in a Google Cloud Platform-based data warehouse (DWH), feeding targets such as:
- Data visualization tools (e.g. Tableau)
- Data-driven e-commerce applications
- Kramp's machine learning platform
What to expect:
- The Data Office is an international, cross-functional team combining Business Intelligence, DevOps, FinOps, Data Science, and Software Development
- Working within Agile Scrum methodologies and the Atlassian toolchain (Jira, Confluence, GitHub, Git)
- To build our state-of-the-art cloud data warehouse, we use Python, Java, BigQuery SQL, and other Google Cloud Platform services such as Cloud Build, Data Transfers, Workflows, Schedulers, and Functions
- Making use of dbt Cloud, Terraform, and data modelling tools
- You will get deep insights into the work of an international wholesaler in the middle of a digital transformation, where you can contribute and make a real impact
- Freedom to initiate new ideas and develop them further
What will help you be successful in this role?
- Openness to continuous learning and agility in addressing business challenges
- Experience in designing, developing, documenting, and maintaining software written in Python and/or Java, as well as dataflows written in SQL
- Knowledge of different data modelling principles and the ability to select the right one depending on the needs of a consumer
- Comfort working with Linux, Git, and Docker to extend and maintain our CI/CD workflows and pipelines, or eagerness to dive into this topic
- Experience working in cloud environments (preferably GCP or AWS)
- Experience with or knowledge of Infrastructure-as-Code tools like Terraform and data transformation tools such as dbt and Dataform
- Cultural awareness, sensitivity, and adaptability in building connections with people from diverse backgrounds
- Excellent conceptual thinking and a proactive approach to decision-making, taking ownership to ensure project success
- Experience with Striim is an advantage
What do we offer?
We are an ambitious, fast-growing, and financially sound organization. You can obviously expect:
- A dynamic role in an international environment
- Comprehensive training and a structured onboarding process
- A clearly defined development path
- Access to modern tools and systems
- Stable employment under a full-time employment contract
- An annual bonus dependent on Kramp results
- An attractive social package (private medical care, a subsidy for the Multisport card, a social fund)
- A generous hybrid model
Do you want to join us?
If you feel challenged by the above role description and requirements, we are interested in meeting you! Please use the application button and send your CV and cover letter. If you do not fully meet all requirements for this role and think you would be a good match for us, please apply and convince us. Do you have a question about this role? Please do not hesitate to contact Patrycja Bąk, Talent Acquisition Specialist, patrycja.bak@kramp.com
About us:
Thanks to our ambitions we are active throughout Europe, with a leading position in the agricultural market for parts. We invest heavily in e-business and innovation. Our aim is to make it easy for our customers to run their business. Working at Kramp means working with over 3,500 international colleagues. The people at Kramp are enthusiastic, professional, and involved. We speak our customers' language, understand their challenges, and earn their trust to exceed their expectations. Our promise "It's that easy" is embedded in our culture and DNA.
Working at Kramp means:
- You belong here! Being part of our open business community
- Your work makes a difference! Have an impact on what matters
- You will develop and realize! Learn and realize your potential
More about Kramp: Careers at Kramp – Join Europe's Leading Partner For Agriculture, Forest and Landscaping and Construction | Kramp
#LI-PB1
Traded Risk Process Manager (MR) - fixed term contract / 12 months
HSBC Technology Poland
⚲ Kraków
15 775 - 23 375 PLN (PERMANENT)
Requirements
- Degree
- Python
- risk management
About the project:
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support, and rewards that will take you further.
Your career opportunity:
This role resides within the Traded & Treasury Risk Management function and is part of the global Risk & Compliance organization. The Traded Risk function monitors, evaluates, and manages market and counterparty risks across the HSBC Group. It is responsible for establishing the global risk management framework for Traded Risk and plays a key role in ensuring that the bank is compliant with current and future regulatory requirements relating to traded risks and their capitalization. The Traded Risk Process Manager acts as a subject matter expert on the market risk process, focusing on quantitative analysis of risk metrics - a results-driven individual who aspires to innovate, improve, and embed solutions that enable effective day-to-day risk management.
What we offer:
- Competitive salary
- Annual performance-based bonus
- Additional bonuses for recognition awards
- Multisport card
- Private medical care
- Life insurance
- One-time reimbursement of home office set-up (up to 800 PLN)
- Corporate parties & events
- CSR initiatives
- Nursery discounts
- Financial support for training and education
- Social fund
- Flexible working hours
- Free parking
Requirements:
What you need to have to succeed in this role:
- A university degree in a quantitative field (e.g. finance, mathematics, econometrics, etc.)
- 2+ years of relevant experience in banking, e.g. Risk, Product Control, Front Office, Middle Office, etc.
- Strong coding skills in Python
- In-depth knowledge of financial markets and the trading business, as well as market risk concepts
- A strong understanding of the key risk factors for different asset types, risk sensitivities and how they are measured, and risk management techniques
- Desirable: previous hands-on exposure to at least one of the following: VaR, IRC, RWA reporting and the corresponding regulatory framework (incl. FRTB)
- Preferably an international financial/risk accreditation, e.g. CFA, FRM, PRM; advanced knowledge of Microsoft Office is a must
- Excellent analytical and problem-solving skills; strong interpersonal and communication skills (English, both written and oral, at B2 level or higher); excellent presentation skills to brief senior management and summarize key information; good business analysis skills will be an asset
Daily tasks:
- Support consolidated risk reporting and analysis at different levels across various HSBC lines of business
- Support risk managers with the implementation, calculation, calibration, and maintenance of risk measures (e.g. sVaR, VaR, RNIV), as well as the development of new risk reports
- Lead, coordinate, or support internal initiatives to deal with regulatory challenges and optimize the control framework of the Traded Risk function
- Support an initiative to migrate existing tools and EUCs to strategic platforms, utilizing Python
- Support the change delivery of risk systems by providing requirements, assisting development, and performing UAT and testing
- Ensure alignment of the function with the Global Operating Model
- Build and maintain relationships with key stakeholders, including Risk Managers, Front Office, Middle Office, Product Control, etc.
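For context on the risk measures mentioned above, here is a minimal sketch of a one-day 99% Value at Risk calculated by historical simulation; the P&L series is random demo data, not a real portfolio.

```python
# Sketch: one-day 99% VaR by historical simulation.
# The P&L series below is random demo data, not a real portfolio.
import numpy as np

rng = np.random.default_rng(seed=7)
daily_pnl = rng.normal(loc=0.0, scale=1_000_000.0, size=500)  # 500 days of P&L

confidence = 0.99
# Historical-simulation VaR: the loss not exceeded on `confidence`
# of past days, i.e. the 1st percentile of the P&L distribution, negated.
var_99 = -np.percentile(daily_pnl, (1.0 - confidence) * 100.0)

print(f"1-day 99% VaR: {var_99:,.0f}")
# Expected shortfall: the average loss on days beyond the VaR threshold.
tail = daily_pnl[daily_pnl <= -var_99]
print(f"1-day 99% ES:  {-tail.mean():,.0f}")
```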
Streaming Data Engineer
Reply Polska
⚲ Katowice
20 160 - 26 880 PLN (B2B)
Requirements
- Python
- SQL
- Data warehouse
- AWS
- Azure
- Java
- Kafka
- Databricks (nice to have)
About the project:
About Data Reply:
As part of the Reply group, Data Reply offers a wide range of services that support customers in becoming data-driven. We operate in various industries and business areas and work intensively with our customers so that they can achieve meaningful results through the effective use of data. Data Reply offers many years of experience in transformation projects aimed at building "data-driven companies". We focus on the development of data platforms, machine learning solutions, and streaming applications - automated, efficient, and scalable - without making any compromises on IT security.
Benefits:
- Motivizer benefits platform to choose and manage all your benefits in one place. You receive a budget (550 PLN monthly) and can choose a medical care package, meal tickets, sports cards (we have Multisport and, on preferential terms, membership cards to one of the most popular gyms), cinema tickets, shop vouchers, discounts, and many more
- Language courses - you'll have access to a multi-language learning platform enabling you to practice your language skills and learn new ones!
- Regular and systematic further training opportunities, both internally and from external providers. We support your ongoing learning and development
- Cooperation within an internal community is our everyday reality. We have networking events, coding challenges, and company parties for different occasions
Requirements:
- Educational background: Bachelor's or Master's degree in CS, Engineering, IT, or a related field
- Streaming technologies: knowledge of the Kafka ecosystem (Kafka clusters, connectors and clients, Kafka Streams) and other streaming platforms (e.g. Flink)
- Programming languages: strong programming skills in languages such as Python, Java, or Kotlin
- Cloud: experience with the public cloud providers AWS/Azure
- Database technologies: knowledge of SQL and NoSQL databases
- Expertise in big data technologies: familiarity with big data frameworks and tools like Spark, and formats such as Apache Iceberg
- Containerization: experience with Docker; familiarity with Kubernetes is nice to have
- Data modeling and warehousing: familiarity with data modeling and warehousing techniques
- Strong analytical skills: ability to analyze complex data structures and derive insights to provide strategic guidance
- Excellent communication: strong interpersonal and communication skills to collaborate effectively with team members and clients
- Very good knowledge of English and Polish
- Problem solving: strong problem-solving skills and the ability to propose creative, efficient solutions to complex problems
- Nice to have: experience with Databricks and dbt
- Availability to work in a hybrid mode, with at least 2 visits to the office per month
Daily tasks:
- Streaming data processing: design, implement, test, and maintain highly scalable real-time stateless and stateful data transformations, ensuring low latency and data quality
- Data system design: design and implement robust, scalable data processing systems; this involves selecting appropriate storage technologies, designing schemas, and planning integration strategies
- Data integration and ETL development: develop and maintain pipelines for data transformation, integration, and ETL processes; ensure data quality and accessibility
- Performance optimization: monitor, tune, and optimize data applications and database performance; address any issues that may affect data processing speeds or analytics capabilities
- Consulting and strategy: provide expert advice and consultancy services to clients on data strategies, architecture choices, and technological advancements
- Analytics and business intelligence support: assist in developing analytics platforms and business intelligence solutions, ensuring that data can be effectively transformed into actionable insights
- Client interaction: work closely with clients to understand their business needs and technical requirements, and translate these requirements into effective data engineering solutions
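To make the "stateless real-time transformation" idea above concrete, here is a minimal consume-transform-produce loop using the confluent-kafka Python client; the broker address, topic names, and message schema are placeholders.

```python
# Sketch: a stateless consume-transform-produce loop with confluent-kafka.
# Broker address and topic names are placeholders.
import json

from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",          # placeholder
    "group.id": "enricher-v1",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "broker:9092"})

consumer.subscribe(["orders.raw"])
try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        order = json.loads(msg.value())
        # Stateless transformation: derive a field, drop internal ones.
        order["gross"] = round(order["net"] * (1 + order["vat_rate"]), 2)
        order.pop("internal_notes", None)
        producer.produce("orders.enriched", json.dumps(order).encode())
        producer.poll(0)  # serve delivery callbacks without blocking
finally:
    producer.flush()
    consumer.close()
```

A stateful variant (windowed aggregation, joins) would additionally need a state store, which is where Kafka Streams or Flink come in.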
Senior Data Engineer (AI & ML)
Harvey Nash Technology
⚲ Warszawa
30 000 - 40 000 PLN (PERMANENT)
Requirements
- Python
- AWS
- Databricks
- Spark (nice to have)
- Scala (nice to have)
About the project:
Data Engineer - Cloud & AI Platforms
We're looking for a Data Engineer to design and scale high-throughput data pipelines supporting advanced analytics and AI-driven products.
What you'll do:
- Architect and maintain distributed data pipelines in Databricks and AWS (Glue, EMR, Fargate, Step Functions)
- Ingest and process large volumes of structured and unstructured data (internal, market, third-party, and alternative sources)
- Collaborate with AI/ML and engineering teams to design scalable data architectures and APIs
- Optimize performance and cost using Spark and cloud-native best practices
- Implement data governance, privacy, lineage, and access controls
- Build automated validation, monitoring, and data quality frameworks
- Evaluate emerging GenAI and data tooling to enhance platform capabilities
What you bring:
- 3+ years of experience in data engineering
- Strong Python and experience with Spark or Scala
- Proven experience building distributed pipelines in cloud environments
- A solid understanding of data modeling, architecture, and warehousing principles
- An innovative problem-solving mindset
- A Bachelor's or Master's degree in Computer Science or Engineering
Nice to have: experience with graph databases.
Senior Data Engineer
Link Group
⚲ Warszawa
26 000 - 33 000 PLN (PERMANENT)
Requirements
- Data engineering
- dbt
- Snowflake
- Fivetran
- SQL
- Git
- Terraform
- HR/People Analytics (nice to have)
About the project:
Senior Data Engineer
Join our team and help build modern, scalable data solutions that power strategic HR decisions. In this role, you will work with cloud technologies, robust data pipelines, and sensitive datasets to deliver high-quality analytics for a global organization.
Requirements:
- 4+ years of experience in data engineering
- Hands-on knowledge of dbt, Snowflake, Fivetran, and SQL, or similar solutions
- An understanding of data privacy and governance
- Ability to work independently and collaboratively in a global environment
- Experience with Git and Terraform
- HR/People Analytics experience is a plus
Daily tasks:
- Build and maintain data pipelines and models supporting our analytics
- Work with Snowflake, dbt, and Fivetran to integrate and transform data from multiple systems
- Ensure data quality through validation frameworks and testing
- Support data governance and the secure handling of sensitive information
- Collaborate with international teams to deliver reliable, analytics-ready datasets
- Automate infrastructure with Terraform and manage code in Git
- Create Power BI dashboards based on curated data
Regular Data Analyst
Ework Group
⚲ Wrocław
120 - 150 PLN (B2B)
Requirements
- Data modeling
- SQL
- PowerBI
- Agile
- Databricks
- Azure DevOps (nice to have)
About the project:
💻 Ework Group - founded in 2000, listed on Nasdaq Stockholm, with around 13,000 independent professionals on assignment - is the total talent solutions provider that partners with clients, in both the private and public sector, and with professionals, to create sustainable talent supply chains. With a focus on IT/OT, R&D, Engineering, and Business Development, we deliver sustainable value through a holistic and independent approach to total talent management. By providing comprehensive talent solutions, combined with vast industry experience and excellence in execution, we form successful collaborations. We bridge clients, partners, and professionals throughout the talent supply chain, for the benefit of individuals, organizations, and society.
🔹 For our client we are looking for a Data Analyst - 2 days at the office in Wrocław 🔹
✔️ We offer:
- A B2B agreement
- Transparent working conditions with both Ework and the client
- Ongoing support during our cooperation
- The possibility to work in an international environment
- A collaborative environment with a Swedish organizational culture
- Private medical care
- Life insurance
- Multisport
- Team-building events
Contact person: nicola.urbanska@eworkgroup.com
Client code: AV01
Requirements:
✔️ Basic requirements:
- Experience working with data (preferably financial data)
- An understanding of data validation and data modeling concepts
- Strong SQL knowledge
- Advanced Power BI
- Basic working knowledge of Databricks for data verification
- Experience working in Agile teams
- Familiarity with the Scrum framework - nice to have
- Experience with tools such as Jira or Azure DevOps
- Strong communication and collaboration skills
Daily tasks:
- Participate in data and finance-related projects delivered on cloud platforms
- Support data validation and ensure delivered data meets business quality expectations
- Contribute to data modeling activities (e.g. logical and dimensional models)
- Analyze financial datasets and support the definition of KPIs and business rules
- Collaborate with Data Engineers and the Product Owner on data requirements and user stories
- Assist in identifying data issues and performing root cause analysis
- Use basic knowledge of Databricks to verify and analyze data
Databricks Data Engineer
Upvanta
⚲ Remote
22 000 - 24 000 PLN (B2B)
Requirements
- Data engineering
- Databricks
- SQL
- PySpark
- Python
- ETL
- Data warehouses
- Azure
- AWS
- GCP
- Big data
- Kanban
- DevOps
About the project:
We are looking for an experienced Databricks Data Engineer to join our team, responsible for designing, building, and developing modern data engineering solutions in a Big Data and public cloud environment.
Requirements:
- Minimum 5 years of experience in data engineering
- Minimum 2 years of experience working with Databricks
- Very good knowledge of SQL, PySpark, and Python
- Experience with data warehouses, ETL, distributed processing, and data modeling
- Familiarity with the concepts of Data Marts, Data Warehouses, Data Lakes, and Data Mesh
- Experience with at least one public cloud (Azure, AWS, or GCP)
- Knowledge of relational database design techniques and non-relational data storage systems
- Experience working with data of varying structure
- Analytical problem-solving skills in a Big Data environment
- Very good English (spoken and written)
- Experience working with Agile methodologies (Scrum, Kanban) and familiarity with DevOps and CI/CD principles
Daily tasks:
- Designing and developing solutions based on Databricks
- Building and optimizing ETL/ELT processes
- Creating and developing data warehouses, Data Marts, and Data Lakes
- Data modeling and designing data processing architecture
- Working with structured, semi-structured, and unstructured data
- Optimizing queries and processing jobs in distributed environments
- Collaborating with analytics and BI teams and with the business
- Working in an Agile environment using good DevOps and CI/CD practices
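As a minimal illustration of the ETL work this role describes, the PySpark sketch below reads semi-structured JSON, applies a simple transformation, and writes partitioned Parquet; all paths and column names are invented.

```python
# Sketch: a small PySpark ETL step - JSON in, partitioned Parquet out.
# Paths and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

events = spark.read.json("/mnt/raw/events/")           # hypothetical input path

cleaned = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))   # derive partition column
    .withColumn("country", F.upper(F.col("country")))
    .dropDuplicates(["event_id"])                      # idempotent re-runs
)

(
    cleaned.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("/mnt/curated/events/")                   # hypothetical output path
)
```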
Database Engineer
ITFS Sp. z o.o.
⚲ Warszawa
25 200 - 30 240 PLN (B2B)
Requirements
- Python
- SQL
- pandas
- PostgreSQL
- GCP
- BigQuery
- Cloud
- Storage
- Liquibase
- Docker
- Bash
- High availability
- IAM
About the project:
- Workplace: 100% remote (possible meetings at the Warsaw office once every month or two)
- Start: ASAP
- Form of cooperation: B2B
- Rate: 150-180 PLN/h net + VAT
Requirements:
- Min. 3 years of experience as a DBA (PostgreSQL) or Data Engineer in a cloud environment
- Very good knowledge of PostgreSQL (in-depth knowledge of the engine, query optimization, and schema management)
- Experience with GCP - familiarity with AlloyDB, BigQuery, Cloud Storage, and Datastream
- Ability to work with Liquibase and Docker
- Ability to write Bash scripts
- Basic knowledge of Python for automating administrative tasks
- Knowledge of Backup & Recovery, Disaster Recovery, High Availability (HA), and security (IAM)
- Ability to design relational database structures for performance and scalability
- Experience working in a Scrum team and good communication skills
- Advanced English (min. B2/C1 level)
Daily tasks:
- Managing the AlloyDB infrastructure (day-to-day administration, monitoring, and ensuring high availability of database systems)
- Designing and implementing database structures for a new contract management module
- Analyzing query plans and tuning the database to keep the system running smoothly
- Configuring and overseeing ETL/CDC processes
- Managing schema changes with Liquibase as part of the CI/CD pipeline
- Implementing access policies (IAM), auditing permissions, and ensuring compliance with the data security policy
- Collaborating with developers
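Query-plan analysis, one of the daily tasks above, usually starts with EXPLAIN; here is a small psycopg2 sketch that fetches a plan for a hypothetical query (connection details, table, and filter values are placeholders).

```python
# Sketch: fetch and print a query plan from PostgreSQL with psycopg2.
# Connection details and the query itself are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="alloydb.example.internal",   # placeholder
    dbname="contracts",
    user="dba",
    password="***",
)
with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE executes the query and reports actual timings;
    # BUFFERS adds shared-buffer hit/read counts.
    cur.execute("""
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT * FROM contracts
        WHERE customer_id = %s AND status = 'active'
    """, (42,))
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```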
Data Senior Engineer
Scalo
⚲ Kraków, Warszawa
26 040 - 27 720 PLN (B2B)
Wymagania
- Google Cloud Platform
- Cloud Run
- Cloud Build
- Pub/Sub
- BigQuery
- Cloud SQL
- Looker Studio
- CI/CD
O projekcie: At Scalo we deliver software projects and support our partners in growing their business. We build software that enables people to drive change, move faster, and achieve better results. We are a company that uses a broad spectrum of IT services to help clients. Our areas of activity include technology consulting, software development, embedded systems, cloud solutions, data management, and dedicated development teams. Hi! You'll find what you're looking for with us - see for yourself! At Scalo you can expect: - hybrid work (Warszawa or Kraków) - 6 office visits per month - rate: up to 165 PLN net + VAT/h/B2B. What we offer: - stable cooperation and a variety of projects (Software, Embedded, Data, Cloud Services), - the chance to grow as a Scalo Ambassador - sharpen your technical and pre-sales skills within our expert community, influence the organization's direction, and support others, - room to build your personal brand - create valuable content, share knowledge and experience, speak at events (e.g., ProMeet), - private medical care - Luxmed, - a cafeteria benefits system - Motivizer, - access to a Multisport card, - an employee referral program with rewards, - company integration events. Sounds interesting? Apply - we're waiting for your CV! Wymagania: This offer is for you if: - you have solid experience building microservices on Google Cloud Platform, in particular with Cloud Run, Cloud Build, Pub/Sub, BigQuery, Cloud SQL, and Looker Studio, - you know cloud architecture best practices and CI/CD processes, and can optimize solutions for performance, scalability, and cost, - you can analyze complex technical and business problems and translate them into effective solutions, - you have experience working with senior and management-level stakeholders, - you communicate very well in English (daily work in an international environment), - you can report and present data clearly, adapting the message to the audience, - you think innovatively and are not afraid to ask questions and challenge the status quo. Codzienne zadania: - developing customer satisfaction survey tools, - designing and developing cloud microservices (see the Pub/Sub sketch below), - automating processes and CI/CD pipelines, - maintaining the quality of code, data, and documentation, - collaborating with Dev, Ops, and Test teams, - working in line with applicable standards and governance.
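As a rough illustration of the Pub/Sub-based microservice pattern this listing names, the snippet below publishes a survey-response event to a topic; a downstream consumer could load it into BigQuery. The project, topic, and payload fields are assumptions.

```python
# Publish one event to a (hypothetical) Pub/Sub topic.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "survey-responses")

event = {"survey_id": "nps-2024", "score": 9, "comment": "Fast support"}
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("published message id:", future.result())  # blocks until the broker acks
```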
Data Engineer
MindFly Technologies Kft.
⚲ Budapest
23 688 - 31 080 PLN (B2B)
Wymagania
- Communication skills
- Data engineering
- Data Vault
- Python
- Databricks
- dbt
- SQL
O projekcie: In this role you are responsible for the technical delivery and quality of our data solutions in a data-warehouse environment. You work independently, in a structured and solution-oriented way, and support both internal stakeholders and project teams in implementing data-driven requirements. The position offers plenty of creative freedom, varied tasks, and the opportunity to contribute to demanding, long-term digitalization projects and actively shape modern data architectures. On-site presence at the project location (Austria) of 1-3 days per month is always expected; travel costs are of course covered by us. If this appeals to you, get in touch - we would be happy to present the position and our company in more detail. Wymagania: We are a young, flat-structured IT services company focused on software testing, software development, and data solutions. To strengthen our team we are currently looking for a Data Engineer - either as a freelancer or in permanent employment. Qualification profile Experience level: Advanced What you should bring: - several years of hands-on experience in data modeling in a data-warehouse environment, ideally with a focus on Data Vault - solid skills in requirements management and in the structured capture and analysis of requirements - experience coordinating and integrating source (delivery) systems - very good SQL skills and, ideally, programming experience with Python Additional assets: - hands-on experience with Databricks - familiarity with dbt (data build tool) - understanding of banking processes and their interdependencies Codzienne zadania: - technical implementation of data requirements in the data-warehouse environment - modeling and evolving data models (focus: Data Vault; see the hash-key sketch below) - coordinating with source systems and ensuring data integration - building, optimizing, and maintaining data pipelines - data quality assurance and validation - close collaboration with requirements engineers, business departments, and project teams - analyzing data requirements and translating them into technical solutions - contributing to long-term digitalization and data projects
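Purely illustrative of the Data Vault focus above: hubs and links are commonly keyed by a hash of the business key. One widespread convention (of several) is shown below; the key values and delimiter are made up.

```python
# Data Vault hub/link hash keys: MD5 over normalized business key parts.
import hashlib

def hash_key(*business_key_parts: str, delimiter: str = "||") -> str:
    """Deterministic hub/link key from one or more business key parts."""
    normalized = delimiter.join(p.strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# A hub key for a customer, and a link key tying customer to account:
hub_customer_hk = hash_key("C-10042")
link_cust_acct_hk = hash_key("C-10042", "ACC-7781")
print(hub_customer_hk, link_cust_acct_hk)
```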
Cloud Data Engineer (AWS)
Reply Polska
⚲ Katowice
11 000 - 22 000 PLN (PERMANENT)
Wymagania
- AWS
- PySpark
- Git
- ETL
- Docker (nice to have)
- Kubernetes (nice to have)
- DevOps (nice to have)
- CI/CD (nice to have)
- Databricks (nice to have)
O projekcie: About Machine Learning Reply Machine Learning Reply offers customized end-to-end Data Science solutions, covering the entire project life cycle – from initial strategy consulting, data architecture and infrastructure topics, to processing data with quality assurance using Machine Learning algorithms. Machine Learning Reply has extensive expertise in the field of Data Science in all key industries of German HDAX companies. Machine Learning Reply empowers clients to successfully introduce new data-driven business models and to optimize existing processes and products – with a focus on distributed open source and cloud technologies. With the Machine Learning Incubator, the company offers a program to train the next generation of decision-makers, data scientists and engineers. Benefits - Motivizer Benefits Platform to choose and manage all your benefits in one place. You receive a budget (550 PLN monthly). You can choose a medical care package, meal tickets, sports cards (we have Multisport and, on preferential terms, membership cards to one of the most popular gyms), cinema tickets, shop vouchers, discounts and many more. - Language Courses – you'll have access to a multi-language learning platform enabling you to practice your language skills and learn new ones! - Regular and systematic further training opportunities - both internally and from external providers. We support your ongoing learning and development. - Cooperation within an internal community is our everyday reality. We have networking events, coding challenges, and company parties for different occasions. Wymagania: - Educational Background: Bachelor's or master's degree in Computer Science, Engineering, IT, or a related field. - Strong experience with AWS cloud services. - AWS Certification (Associate or Professional level preferred). - Experience as a Data Engineer or ETL Developer. - Strong proficiency in Python programming. - Practical experience with PySpark and distributed data processing. - Hands-on experience with AWS Glue. - Designing and maintaining ETL pipelines. - Good understanding of data lake and data warehouse architecture. - Experience with version control systems (Git) and familiarity with CI/CD practices. - Good written and spoken English and Polish. - Openness to hybrid work and regular visits to the Katowice office, once every two months. Codzienne zadania: - Design and implement cloud-based data processing solutions in AWS. - Develop and maintain ETL/ELT pipelines. - Build and optimize data workflows using PySpark. - Work with AWS Glue jobs and the Data Catalog (see the sketch below). - Monitor, troubleshoot, and improve data pipeline performance. - Collaborate with teams to translate business requirements into data solutions. - Maintain documentation and best practices for data platforms.
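A minimal AWS Glue job skeleton of the kind this role would maintain, using the standard Glue boilerplate: read from the Glue Data Catalog, apply a trivial PySpark transform, and write to S3. The database, table, and bucket names are placeholders.

```python
# Standard Glue job scaffold; "sales"/"raw_orders"/bucket path are invented.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales", table_name="raw_orders"
).toDF()

daily = orders.groupBy("order_date").count()  # trivial example transform

daily.write.mode("overwrite").parquet("s3://my-bucket/curated/daily_orders/")
job.commit()
```

Jobs like this are typically deployed through the Glue console or IaC and run on a schedule or event trigger.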
Cloud Data Engineer
Reply Polska
⚲ Katowice
20 160 - 26 880 PLN (B2B)
Wymagania
- Python
- Java
- AWS
- Azure
- SQL
- NoSQL
- Big data
- Spark
- Kafka
- Flink
- Docker (nice to have)
- Kubernetes (nice to have)
- DevOps (nice to have)
- CI/CD (nice to have)
- Databricks (nice to have)
O projekcie: About Data Reply As part of the Reply group, Data Reply offers a wide range of services that support customers in becoming data driven. We operate in various industries and business areas and work intensively with our customers so that they can achieve meaningful results through the effective use of data. Data Reply offers many years of experience in transformation projects to achieve “data-driven companies”. We focus on the development of data platforms, machine learning solutions and streaming applications - automated, efficient, and scalable - without making any compromises in IT security. Benefits - Motivizer Benefits Platform to choose and manage all your benefits in one place. You receive a budget (550 PLN monthly). You can choose a medical care package, meal tickets, sports cards (we have Multisport and, on preferential terms, membership cards to one of the most popular gyms), cinema tickets, shop vouchers, discounts and many more. - Language Courses – you'll have access to a multi-language learning platform enabling you to practice your language skills and learn new ones! - Regular and systematic further training opportunities - both internally and from external providers. We support your ongoing learning and development. - Cooperation within an internal community is our everyday reality. We have networking events, coding challenges, and company parties for different occasions. Wymagania: - Educational Background: Bachelor's or master's degree in Computer Science, Engineering, IT, or a related field. - Very good knowledge of English and Polish. - Programming Language: Strong programming skills in languages such as Python, Java or Scala. - Cloud: Experience with the public cloud providers AWS/Azure. - Database Technologies: Knowledge of SQL and NoSQL databases. - Expertise in Big Data Technologies: Familiarity with big data frameworks and tools like Spark, Kafka, Flink or others, and formats such as Apache Iceberg. - Data Modeling and Warehousing: Familiarity with data modeling and warehousing techniques. - Strong Analytical Skills: Ability to analyze complex data structures and derive insights to provide strategic guidance. - Excellent Communication: Strong interpersonal and communication skills to effectively collaborate with team members and clients. - Problem Solving: Strong problem-solving skills and the ability to propose creative, efficient solutions to complex problems. - Plus: Any additional experience with Docker, Kubernetes, DevOps practices, or CI/CD. - Nice to have: experience with Databricks and dbt. - Availability to work in a hybrid mode with at least 2 visits to the office per month. Codzienne zadania: - Data System Design: Design and implement robust, scalable data processing systems: this involves selecting appropriate storage technologies, designing schemas, and planning integration strategies. - Data Integration and ETL Development: Develop and maintain pipelines for data transformation, integration, and ETL processes, including streaming ingestion (see the Kafka sketch below). Ensure data quality and accessibility. - Performance Optimization: Monitor, tune, and optimize data applications and database performance. Address any issues that may affect data processing speeds or analytics capabilities. - Consulting and Strategy: Provide expert advice and consultancy services to clients on data strategies, architecture choices, and technological advancements. - Analytics and Business Intelligence Support: Assist in developing analytics platforms and business intelligence solutions, ensuring that data can be effectively transformed into actionable insights. - Client Interaction: Work closely with clients to understand their business needs and technical requirements. Translate these requirements into effective data engineering solutions.
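Sketching the streaming side of the stack named above (Spark + Kafka): a Structured Streaming query that consumes a topic and appends the raw payloads to a file sink. The broker address, topic, and paths are assumptions.

```python
# Consume a Kafka topic with Spark Structured Streaming and land the payloads.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

values = stream.select(F.col("value").cast("string").alias("payload"))

query = (values.writeStream
         .format("parquet")
         .option("path", "/data/events/")
         .option("checkpointLocation", "/chk/events/")
         .start())
query.awaitTermination()
```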
Data Analyst (F/M)
Antal
⚲ Warszawa
21 840 - 25 200 PLN (B2B)
Wymagania
- SDLC
- SQL
- Excel
- Power BI
- AI
- GitHub
- ChatGPT
- ITIL
- DevOps
O projekcie: 📍 Location: Warszawa (hybrid model - min. 1 day in the office per week) 💰 B2B: 130 - 150 PLN net/h 🏦 Industry: Financial services / IT Enterprise We are looking for a Data Analyst to work on analyzing the effectiveness of IT development processes in a large, stable financial-sector organization. The role combines data analysis, knowledge of IT processes, and close collaboration with technical teams and business stakeholders. The position gives you real influence over how IT development effectiveness is measured and improved, and over the transparency of reporting. We offer: - long-term cooperation in a large, stable enterprise environment - real influence on processes, metrics, and reporting quality - a mature process and technology environment - a Luxmed package and a MultiSport card Wymagania: - 5-8 years of experience as a Data Analyst, preferably in corporate or financial environments - very good knowledge of the IT software development lifecycle (SDLC) - hands-on experience with analytics and reporting tools: SQL, Excel, Metabase, Power BI - experience using AI tools that support analytical work (e.g., GitHub Copilot, ChatGPT) - knowledge of IT processes and standards: Incident & Problem Management, Change Management, Release & Deployment, Service Request Management (ITIL), DevOps - ability to work with data from developer tools and to define effectiveness metrics (e.g., DORA, SPACE; see the sketch below) - ability to explain complex topics in a simple, understandable way - strong communication skills and a willingness to share knowledge Codzienne zadania: - identifying and analyzing datasets from IT and developer tools, including assessing data quality - writing SQL queries and configuring views and reports for effectiveness metrics - defining how effectiveness is measured and how it is reported - monitoring effectiveness, analyzing data trends, and identifying limitations that affect how metrics are interpreted - developing methods for presenting and reporting variability in IT development effectiveness - analyzing IT development processes and recommending and implementing actions that optimize effectiveness and the experience of delivery teams - creating and maintaining an internal knowledge base (metric definitions, data sources, assumptions, and limitations) - ongoing collaboration with IT teams and stakeholders, including explaining what the metrics mean and how to interpret them
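To make one of the metrics above concrete, here is an illustrative pandas computation of DORA lead time for changes (time from commit to production deploy) over a toy export; the data and column names are invented.

```python
# Median DORA lead time for changes from hypothetical tooling exports.
import pandas as pd

changes = pd.DataFrame({
    "change_id": ["CH-1", "CH-2", "CH-3"],
    "committed_at": pd.to_datetime(["2024-05-01 09:00", "2024-05-02 14:00", "2024-05-03 10:00"]),
    "deployed_at": pd.to_datetime(["2024-05-01 17:00", "2024-05-04 09:00", "2024-05-03 18:30"]),
})

changes["lead_time_h"] = (changes["deployed_at"] - changes["committed_at"]).dt.total_seconds() / 3600
print("median lead time (hours):", changes["lead_time_h"].median())
```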
Senior MMM Data Scientist
hubQuest
⚲ Warsaw
28 500 - 37 000 PLN (B2B)
Wymagania
- Data science
- Machine learning
- Python
- Marketing Mix Modeling
- Time Series Analysis
- NumFOCUS
- pandas
- scikit-learn
- SciPy
- NumPy
- Matplotlib
- Azure (nice to have)
- Google Meridian (nice to have)
- LightweightMMM (nice to have)
- Robyn (nice to have)
- Bayesian Modeling (nice to have)
- MCMC (nice to have)
- MLOps (nice to have)
- Kubernetes (nice to have)
- Docker (nice to have)
O projekcie: We are a team of tech and data enthusiasts on a mission to bring together the best minds in IT and analytics. Our goal is to build cutting-edge analytical hubs that empower our partners to become truly data-driven organizations. Join one of the largest in-house MMM (Marketing Mix Modeling) teams in the industry, building next-generation analytics solutions at worldwide scale. As part of a flagship initiative within a leading organization operating in over 190 markets, our team is at the forefront of data-driven transformation and has strong visibility across the business. We are looking for a Senior Data Scientist with MMM expertise to help shape, develop, and deploy impactful analytical solutions as part of the Global Analytics Unit—a central, high-priority function driving innovation across the company. In this high-impact role, you will own the full data science lifecycle: from exploratory analysis and model development to refactoring, testing, deployment, and performance optimization. You will work alongside talented MMM professionals and make a truly global impact with your work. If you want to contribute to meaningful, large-scale projects and help our partners make smarter, data-driven decisions—this role is for you. Why join us If you’re looking for an opportunity to: - Tackle complex, real-world business challenges using advanced analytics and machine learning. - Build and deploy production-grade models that influence decisions across global markets. - Work in one of the largest and most skilled MMM teams in the industry. - Join a product-oriented FMCG environment that values innovation and technical excellence. - Use the latest GenAI tools, including GitHub Copilot, to make your work more efficient …this role might be the perfect fit. You will also benefit from: - Flexible working hours with a hybrid/remote work model. - Private medical care and a Multisport card. - Access to online learning platforms, certifications, and global knowledge exchange. - A relaxed, non-corporate atmosphere with strong respect for expertise and ownership. - A modern office in central Warsaw with excellent transport links. Wymagania: Requirements - 5+ years of professional experience in Data Science or ML Engineering, including production deployments. - At least 1 year of experience developing Marketing Mix Models (MMM) or working with Time Series Analysis. - MSc or PhD in Computer Science, Statistics, Mathematics, Physics, or a related field. - Strong Python skills and solid software engineering foundations (OOP, modular design, testing). - Strong experience with ML frameworks and the NumFOCUS ecosystem: pandas, scikit-learn, SciPy, NumPy, Matplotlib. - Familiarity with cloud computing (Azure experience is a plus). - Experience designing robust validation and monitoring strategies for production ML models. - Strong analytical mindset and the ability to think critically about model behavior. - Fluency in English (spoken and written), as you will collaborate globally. Nice to have - Experience with Google Meridian, LightweightMMM, or Robyn. - Experience with Bayesian modeling. - Knowledge of MCMC methods. - Understanding of CI/CD and MLOps pipelines, Docker/Kubernetes, and model deployment practices. - Experience working in agile, cross-functional environments. - Exposure to multi-tenant or global-scale data systems. - Publications, GitHub portfolio, or open-source contributions. 
Codzienne zadania: - Lead end-to-end MMM data science projects—from problem framing and data exploration to deployment and ongoing monitoring. - Develop, refactor, and productionize robust data preprocessing pipelines to ensure stability and scalability. - Design and integrate new features into our advanced MMM framework to improve performance and analytical capabilities. - Build and maintain reusable helper functions for hyperparameter optimization, feature selection, and model evaluation. - Improve model accuracy and diagnostics using weighted R², adjusted R², Xi correlation, RMSE, and MAE. - Automate baseline validation and implement standardized approaches for benchmarking model performance. - Work closely with Data Engineers and MLOps Specialists to deploy models into production within Azure-based infrastructure. - Collaborate with business stakeholders to translate analytical insights into clear, actionable recommendations.
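For orientation, a hedged sketch of two MMM building blocks referenced in this listing: geometric adstock (carry-over) and a saturating response curve, fed into a simple regularized regression. The decay and half-saturation values, and the synthetic data, are illustrative only.

```python
# Toy MMM ingredients: adstock + saturation, then a Ridge fit on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Geometric adstock: effect today = spend today + decay * effect yesterday."""
    out = np.zeros_like(spend, dtype=float)
    for t, s in enumerate(spend):
        out[t] = s + (decay * out[t - 1] if t > 0 else 0.0)
    return out

def saturate(x: np.ndarray, half_sat: float) -> np.ndarray:
    """Simple diminishing-returns curve (Hill-style with shape 1)."""
    return x / (x + half_sat)

rng = np.random.default_rng(0)
tv = rng.uniform(0, 100, size=104)              # two years of weekly TV spend
x = saturate(adstock(tv, decay=0.5), half_sat=40.0).reshape(-1, 1)
sales = 200 + 80 * x.ravel() + rng.normal(0, 5, size=104)

model = Ridge(alpha=1.0).fit(x, sales)
print("estimated TV contribution coefficient:", model.coef_[0])
```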
Senior Database Engineer
Xebia sp. z o.o.
⚲ Rzeszów, Wrocław, Gdańsk
22 500 - 31 000 PLN (B2B)
Wymagania
- Python
- MongoDB
- PostgreSQL
- Kubernetes
- Helm
- HA concepts (nice to have)
- Cloud (nice to have)
O projekcie: Who We Are While Xebia is a global tech company, our journey in CEE started with two Polish companies – PGS Software, known for world-class cloud and software solutions, and GetInData, a pioneer in Big Data. Today, we’re a team of 1,000+ experts delivering top-notch work across cloud, data, and software. And we’re just getting started. What We Do We work on projects that matter – and that make a difference. From fintech and e-commerce to aviation, logistics, media, and fashion, we help our clients build scalable platforms, data and AI solutions, and cutting-edge applications to shape the future of tech. Our clients include McLaren, Aviva, Deloitte, Spotify, Disney, ING, UPS, Tesco, Truecaller, AllSaints, Volotea, Schmitz Cargobull, Allegro, InPost, and many, many more. We value smart tech, real ownership, and continuous growth. We use modern, open-source stacks, and we’re proud to be trusted partners of Databricks, dbt, Snowflake, Azure, GCP, and AWS. Fun fact: we were the first AWS Premier Partner in Poland! Beyond Projects What makes Xebia special? Our community. We support tech communities, organize meetups (Software Talks, Data Tech Talks), and have a culture that actively supports your growth via Guilds, Labs, and personal development budgets — for both tech and soft skills. It’s not just a job. It’s a place to grow. What sets us apart? Our mindset. Our vibe. Our people. And while that’s hard to capture in text – come visit us and see for yourself. About Project In this role, you will provide technical direction for complex systems, shape engineering standards, and support a talented group of software and QA engineers — without direct line management responsibilities. Your work will combine hands-on engineering, technical guidance, and mentorship, with a strong focus on modern AI-driven approaches to code generation, software integrations, and developer productivity. Wymagania: Your profile: - 4+ years of experience managing large-scale, high-availability database systems (PostgreSQL / MongoDB), - experience with Python or Bash automation scripting, - proven experience with Kubernetes & Helm, - good verbal and written communication skills in English (min. B2+), - working from the European Union region with a valid work permit is required. Nice to have: - knowledge of advanced PostgreSQL HA concepts (e.g., streaming replication, Repmgr/Patroni) and/or MongoDB sharding and replication, - experience with AWS, GCP, or Azure, - experience with version control systems, configuration management tools, and IaC such as Terraform or CloudFormation, - experience with database tools such as pgAdmin, pgbench, Robo 3T, Studio 3T, MongoDB Ops Manager, and mongomirror, - experience with Prometheus, CloudWatch, and monitoring tools, both within Kubernetes and for external cloud-managed infrastructure.
Codzienne zadania: - designing, implementing, and maintaining database infrastructure using StatefulSets, Operators, and Helm charts to ensure databases are reliable, self-healing, and scalable, - owning the deployment lifecycle for database clusters by managing version control for Helm charts and configuration templates, - supporting and administering production database systems by proactively instrumenting and monitoring performance, security, and availability within the containerized environment, - performing zero-downtime upgrades and migrations for major and minor releases, developing and maintaining Helm hooks and custom scripts to automate complex stateful operations, - managing and optimizing performance for backend data stores, ensuring data consistency and integrity across pod life cycles, - developing and maintaining automation tools and scripts (Bash, Python) specifically focused on simplifying Kubernetes management tasks, such as provisioning users/secrets and monitoring cluster state.
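A hypothetical automation snippet in the spirit of the last bullet above: use the official kubernetes Python client to verify that all pods of a database StatefulSet are Running. The namespace and label selector are placeholders.

```python
# Check health of (hypothetical) database pods in a Kubernetes namespace.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run inside the cluster
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("databases", label_selector="app=postgres")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)

unhealthy = [p.metadata.name for p in pods.items if p.status.phase != "Running"]
if unhealthy:
    raise SystemExit(f"unhealthy database pods: {unhealthy}")
```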
Senior Data Scientist, Applied AI
Instructure Hungary Ltd
⚲ Budapest
17 422 - 23 955 PLN (PERMANENT)
Wymagania
- Machine learning
- Applied AI engineering
- AI
- CI/CD
- AWS
- Python
- RAG
- Data science
- Deployment
- Applied Data Science
- Software Engineering
- Orchestration
O projekcie: At Instructure, we believe in the power of people to grow and succeed throughout their lives. Our goal is to amplify that power by creating intuitive products that simplify learning and personal development, facilitate meaningful relationships, and inspire people to go further in their education and careers. We do this by giving smart, creative, passionate people opportunities to create awesome things. Our Advanced Development team builds AI-native capabilities, reusable AI systems, and shared infrastructure that power multiple products and workflows across the platform. We are looking for a Senior Data Scientist to help build, ship, and scale production AI systems from the ground up. This is an engineering-forward ML/AI systems role, intended for candidates who are comfortable moving from prototype to production and can own critical parts of the ML/AI systems lifecycle, including pipelines, model integration, inference services, deployment, monitoring, and operational reliability. You will work closely with product, engineering, and research partners to turn advanced AI ideas into reliable product capabilities used at scale. Important note on scope: This role is not primarily focused on BI/reporting or experimentation analytics. We are looking for someone with strong experience building and operating production ML/AI systems. Growth & Impact - In This Role, You’ll Be Expected To - Influence architecture and engineering standards for AI systems - Shape reusable infrastructure and platform patterns - Mentor a growing team of AI/ML engineers - Turn advanced AI research into production value for educators and learners at scale Wymagania: What You’ll Need - 6+ years of experience in software engineering, machine learning engineering, applied AI engineering, or a closely related role with production ownership - Demonstrated experience taking ML/AI systems from prototype to production in live environments - Strong experience with deployment pipelines, CI/CD, orchestration, and operating production services on AWS - Experience building and operating APIs/services (Python preferred), working with containers, and debugging reliability/performance issues - Working knowledge of modern AI application patterns (for example, embeddings, retrieval, semantic search, or RAG) and the engineering constraints involved in running them in production - Strong communication skills and the ability to work through ambiguity across engineering, product, and research teams It Would Be a Bonus If You Had - Experience building AI-native product features (not just internal analytics models) - Experience with vector databases, retrieval infrastructure, or semantic indexing pipelines - Experience with graph databases or graph-based reasoning systems - Experience with observability and evaluation for LLM or retrieval systems (quality metrics, drift, failure analysis) - Experience creating internal engineering standards, templates, or reference implementations adopted by multiple teams - Experience in education technology, learning systems, or knowledge/skills modeling - Experience mentoring engineers in a high-growth or platform-building environment Codzienne zadania: - Architect, build, and deploy production ML/AI systems that power customer-facing product capabilities. - Design and operate scalable inference services, APIs, and backend components for model-driven applications. 
- Build and improve data, feature, deployment, and orchestration pipelines on AWS across development, staging, and production environments. - Productionize AI workflows with strong MLOps practices, including CI/CD, versioning, testing, monitoring, rollback, and operational reliability. - Define and implement evaluation frameworks for model quality, system reliability, latency, and cost, and use those signals to improve production performance. - Partner with product, research, and engineering teams to turn prototypes into robust, scalable services, while driving strong engineering standards in code quality, documentation, observability, and incident readiness.
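A minimal sketch of the retrieval step behind the RAG pattern mentioned above: rank documents by cosine similarity between a query embedding and document embeddings. Random vectors stand in for a real embedding model.

```python
# Top-k retrieval by cosine similarity; embeddings are random stand-ins.
import numpy as np

rng = np.random.default_rng(42)
doc_embeddings = rng.normal(size=(1000, 384))   # 1000 docs, 384-dim vectors
query = rng.normal(size=384)

# Normalize so the dot product equals cosine similarity.
docs_n = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
query_n = query / np.linalg.norm(query)

scores = docs_n @ query_n
top_k = np.argsort(scores)[::-1][:5]
print("top-5 document ids:", top_k, "scores:", scores[top_k].round(3))
```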
Senior Data Engineer (Databricks | DataMarts | Cloud)
1dea
⚲ Warszawa, Kraków, Wrocław, Gdańsk
27 720 - 30 240 PLN (B2B)
Wymagania
- Data
- Data engineering
- Databricks
- Unity Catalog
- Big data
- Python
- SQL
- Cloud computing
- Azure
- AWS
- GCP
- Data warehouse
- Data Lake
- Data mesh
- Agile (nice to have)
- CI/CD (nice to have)
O projekcie: We are looking for an experienced expert to join our team and help us build modern data solutions. If Databricks holds no secrets for you and Big Data architecture is your passion - we're waiting for you! Please apply with a CV in English. --- Engagement terms: Area: IT consulting Location: 100% remote Start: ASAP (we accept candidates with a notice period of up to 3 months) Rate (set individually): 165 - 180 PLN net + VAT/h Engagement: B2B (outsourcing via 1dea), full-time, long-term Recruitment process (100% remote): - a short informational phone call with a 1dea recruiter (~10 min) - a technical/project screening with the Client (~1 - 1.5 h) - (if successful) an introductory call with a representative of the project at the end Client (~30 min) - cooperation decision We offer: - engagement under an open-ended B2B contract - you will join a company with a solid market position - the company provides modern hardware, software, and setup - flexible working hours - 100% remote work - professional career advice and support from an experienced team of 1dea specialists - we value collegiality, openness, respect, mutual help, and support in developing both your own skills and those of your teammates - we foster a culture of creativity: every team member can propose their own ideas and solutions, and their voice is always taken into account Wymagania: - Minimum 5 years of experience in Data Engineering - At least 4 years working with the Databricks platform (including Unity Catalog) - Minimum 2 years of experience on Big Data projects - Proficiency in Python and SQL - Solid experience in cloud computing (Azure, AWS, or GCP) - Practical knowledge of Data Warehouse, Data Lake, and Data Mesh concepts, as well as relational and non-relational databases - Knowledge of software engineering principles and distributed data processing - Very good command of English (spoken and written) - An analytical approach to solving problems at scale Nice to have: - Experience with Agile methodologies - Familiarity with CI/CD tools and deployment automation Codzienne zadania: - Designing and implementing scalable data pipelines - Managing data with Unity Catalog (see the sketch below) - Optimizing ETL processes and modeling data structures in distributed environments - Working with a variety of data formats (structured, semi-structured, unstructured) - Collaborating in an agile environment (Scrum/Kanban) following DevOps and CI/CD practices
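Illustrating the Unity Catalog bullet above: Unity Catalog addresses tables through a three-level namespace (catalog.schema.table). The catalog, schema, and table names below are placeholders.

```python
# Read and write Unity Catalog tables from PySpark on Databricks.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.table("main.sales.orders")        # catalog.schema.table
daily = (orders
         .groupBy(F.to_date("created_at").alias("day"))
         .agg(F.sum("amount").alias("revenue")))

daily.write.mode("overwrite").saveAsTable("main.sales.daily_revenue")
```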
Senior Data Architect (AWS, Databricks)
Scalo
⚲ Warszawa
30 240 - 33 600 PLN (B2B)
Wymagania
- AWS Data Services
- Iceberg
- Glue
- CI/CD
- Athena
- SQL
- Snowflake
- Databricks
O projekcie: Hi! You'll find what you're looking for with us - see for yourself! At Scalo you can expect: - 100% remote work, - a rate of up to 200 PLN/h (B2B). What we offer: - Stable cooperation: technology challenges and work with modern solutions; Internal Mobility - you don't have to change companies to change projects - Scalo Ambassadors: develop technical and pre-sales skills, influence the organization's direction, and support others - Personal branding: create valuable content, share knowledge and experience, appear as an expert at events (e.g., ProMeet) - grow your brand naturally - Benefit box: full medical care, a MultiSport card, and a broad Motivizer offering - pick the benefits that support your health, energy, and everyday comfort - Referral program: refer a friend from IT and we'll handle the rest - they get a new project, you get a bonus; a win-win in its purest form - Company integration events: outings, company events, and opportunities to get to know each other outside of projects - because good relationships make the best teams. Sounds interesting? Apply - we're waiting for your CV! Wymagania: This offer is for you if: - you have at least 7 years of experience designing and implementing data architecture in complex environments, - you know AWS Data Services (Iceberg, Glue, Athena) inside out, - you have experience with semi-structured data, schema mapping, and the Medallion Architecture, - you know SQL and query optimization very well, - you have worked with Snowflake and Databricks, - you have experience in the insurance industry, - you communicate fluently in English (min. B2/C1 - daily communication with the team). Codzienne zadania: - defining and documenting a unified data architecture for an insurance platform, - reviewing and improving the data architecture of the AWS Data Lake (Medallion Architecture), - designing and documenting data access patterns and APIs, - reviewing draft solutions and advising on the best ways to deliver them, - building core data models and reusable system components, - advising on BI, ETL, and data management tools.
MLOps Engineer
DCG
⚲ Warszawa
Wymagania
- PySpark
- AI
- Docker
- Pandas
As a recruitment company, DCG understands that every business is powered by experienced professionals. Our management style and partnership approach enable us to meet your needs and provide continuous support. Due to our ongoing growth and the large number of recruitment projects we undertake for our partners, we are currently looking for: About the project: Responsibilities: • Create continuous integration templates tailored for model development, ensuring version control, testing, and reproducibility of our actuarial pricing models and datasets • Work closely with members of the ML Engineering team and actuaries to audit and optimize the reliability and scalability of the actuaries' model training pipelines • Develop effective monitoring strategies to track the performance, reliability, and efficiency of the system • Manage the end-to-end operation of the AI platform to guarantee high availability, responsive performance, and secure data handling during document ingestion and processing • Oversee the integration and management of cloud resources to optimize cost, performance, and compliance with security standards, thereby enabling continuous innovation on the platform Requirements: • Bachelor's or Master's degree in Mathematics, Computer Science, Machine Learning, or a related field • Mastery of Data Science frameworks (pandas, pyspark, sklearn and shap) and MLOps frameworks (MLflow, Kedro/Airflow, Hyperopt/Optuna and Great Expectations) in Python (see the MLflow sketch below) • Experience building GenAI agentic workflows using LangChain or smolagents • Basic familiarity with dashboarding tools (Power BI/Tableau) • Strong understanding of DevOps methodologies (CI/CD) and experience implementing GitHub Actions (or similar) workflows • Experience serving models via APIs using Flask or FastAPI • Experience with cloud platforms (e.g., AWS, Azure, GCP) and containerization (e.g., Docker, Kubernetes) • Extremely high attention to detail and rigor • English - at least B2 level Nice to have: • French - A2/B1 Offer: • Private medical care • Co-financing for the sports card • Constant support of a dedicated consultant • Employee referral program
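Since the stack above centers on MLflow, a short, hedged tracking sketch: fit a toy model and log its parameters, metrics, and artifact so runs stay reproducible. The experiment name and model are illustrative.

```python
# Track a toy model run with MLflow (experiment name is hypothetical).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

mlflow.set_experiment("pricing-models")
with mlflow.start_run():
    model = Ridge(alpha=0.5).fit(X, y)
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("r2", r2_score(y, model.predict(X)))
    mlflow.sklearn.log_model(model, artifact_path="model")  # versioned artifact
```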
Data Warehouse/ETL Engineer
Scalo
⚲ Warszawa
23 520 - 26 880 PLN (B2B)
Wymagania
- ETL
- ELT
- Oracle
- PostgreSQL
- Google BigQuery
- Informatica
- Kafka
- Pub/Sub
- Linux
- AutomateNow (nice to have)
- Python (nice to have)
- Airflow (nice to have)
- Spark (nice to have)
- Rust (nice to have)
O projekcie: Hi! You'll find what you're looking for with us - see for yourself! At Scalo you can expect: - hybrid work: at least 2 days a week in the office (Warszawa), - a rate of up to 160 PLN/h on B2B, depending on experience and scope of competence. What we offer: - Stable cooperation: technology challenges and work with modern solutions; Internal Mobility - you don't have to change companies to change projects - Scalo Ambassadors: develop technical and pre-sales skills, influence the organization's direction, and support others - Personal branding: create valuable content, share knowledge and experience, appear as an expert at events (e.g., ProMeet) - grow your brand naturally - Benefit box: full medical care, a MultiSport card, and a broad Motivizer offering - pick the benefits that support your health, energy, and everyday comfort - Referral program: refer a friend from IT and we'll handle the rest - they get a new project, you get a bonus; a win-win in its purest form - Company integration events: outings, company events, and opportunities to get to know each other outside of projects - because good relationships make the best teams. Sounds interesting? Apply - we're waiting for your CV! Wymagania: This offer is for you if: - you know the principles of designing and building modern data warehouses, - you know how to build ETL/ELT feed pipelines, - you have expert knowledge of the Oracle and PostgreSQL relational databases (database design, build, and tuning), - you know Google BigQuery, - you know Informatica PowerCenter, - you understand the technologies used in event-driven architectures (e.g., Kafka, Pub/Sub), - you are comfortable working on Linux as an advanced user. Nice to have: - AutomateNow, - Python, - Airflow, - Rust, - Apache Spark. Codzienne zadania: - designing and building modern data warehouses and Data Lake solutions, - creating and developing online and offline data feed processes (ETL/ELT), - analyzing data and preparing logical and physical models for it, - tuning SQL queries on relational and warehouse databases, both on-prem and in GCP, - building offline and online data flows in both on-prem tools and GCP, - building data loading solutions in Python and Rust for structured and unstructured data, - preparing subject-specific data marts on relational and columnar databases, - designing and implementing solutions based on business requirements and needs, - resolving incidents and participating in problem solving, - supporting tests and deployments of data feed processes, - helping maintain deployed software, - working in line with the DevSecOps model.
Data Engineer (mid)
Link Group
⚲ Cracow
17 000 - 20 000 PLN (B2B)
Wymagania
- GCP
- BigQuery
- Pub/Sub
- dbt
- Cloud
- Oracle
- SQL
- Python
- Java
- Scala
- DevOps
- Jenkins
- GitLab
- Kubernetes
- Infrastructure as Code
- ETL
- Kanban
- Jira
- Terraform (nice to have)
- Data modelling (nice to have)
O projekcie: You will join an international consulting organization that supports both private and public sector clients in solving complex challenges related to people, processes, and digital transformation. Benefits: - medical care - multisport card - insurance Office location: Cracow Wymagania: - Strong hands-on experience with GCP tools (e.g., BigQuery, Dataflow, Pub/Sub, Dataplex, dbt) and modern cloud data ecosystems - Advanced Oracle SQL and PL/SQL skills, including work with complex stored procedures and large-scale datasets - Programming experience in languages such as Python, Java, or Scala - Familiarity with messaging, streaming, and DevOps tooling (e.g., Jenkins, GitLab) and container orchestration using Kubernetes - Practical experience with Infrastructure as Code solutions, preferably Terraform - Solid understanding of data modelling concepts, data warehousing architectures, and ETL/ELT best practices - Experience working in Agile delivery environments (Scrum, Kanban, Jira) - Exposure to financial markets, trading platforms, or high-performance data environments is considered a strong advantage Codzienne zadania: - Design, develop, and maintain scalable data pipelines within Google Cloud Platform environments - Build and manage data integration and transformation processes connecting cloud services with on-premise Oracle databases - Collaborate closely with trading, risk, and analytics stakeholders to gather requirements and deliver both real-time and batch data solutions - Monitor and optimize data platform performance, particularly for latency-sensitive trading use cases - Work within Agile/Scrum teams to deliver business-critical data initiatives in cross-functional environments - Ensure proper data governance, lineage tracking, and regulatory compliance aligned with standards such as MiFID II and FCA requirements - Automate infrastructure and workflows using Infrastructure as Code, CI/CD pipelines, and containerization technologies like Docker and Kubernetes
Data Engineer (m/f/d)
PAYBACK
⚲ Warszawa
Wymagania
- ETL
- PySpark
- GBQ
- Terraform
- pytest
- Google Cloud Platform
- Oracle SQL
- Python
🔵 PAYBACK is the world’s leading multi-partner loyalty program. As an international player, we operate in the German, Italian, Mexican, and Austrian markets. More than 10 million active customers already use the German PAYBACK app and mobile PAYBACK services via their smartphones. Your responsibilities: • Develop and migrate data processing, ETL processes, data pipelines, and analytics / data science algorithms using Python, GBQ, and Oracle • Develop tools and co-create an internal framework to automate work in the DATA area • Collaborate with other DATA teams (Data Analytics Consulting, Data Science/MLOps, Datawarehouse Engineering) in the joint execution of projects • Actively participate in the internal community of experts evaluating and co-creating internal work standards • Provide substantive support to other team members Your profile: • A minimum of 3 years of Python programming experience in creating, developing, and maintaining high-quality, object-oriented applications/modules focused on data processing (ETL), analysis, and data modeling • Ability to develop, effectively test (pytest; see the sketch below), and maintain clean, clear, high-quality code • Willingness to acquire new knowledge and pass it on to colleagues, with effective knowledge management and transfer skills (knowledge of Confluence and Mural is a plus) • Good working knowledge of a ticket management environment (JIRA-type) and distributed code management (git/Bitbucket-type) • Good knowledge of SQL (Oracle SQL or GBQ preferred) and PySpark • Fluency in spoken and written English: either English (B2) plus native Polish, or English (C1) if not a native Polish speaker • Your additional assets will be: experience in process automation and maintenance (DevOps) using Airflow/CI-CD/Jenkins; familiarity with containerization techniques (preferably Docker and Colima); good knowledge of Google Cloud Platform (Terraform knowledge is a plus) How about: Employment contract? 📝 Of course. With us you do not have to worry about stable employment. Benefits? 🏋️♀️ We have them! Among others: a corporate incentive program, a sports card, and private medical care. Lunch card? 💳 Once your contract is extended to a permanent one, you will receive additional funds to use for meal purchases. Working in a hybrid model? 🏠 Of course! You work with us 2 days a week from the office, 3 days a week from home. Work wherever you want? 🌴 At PAYBACK you have the opportunity to work 100% remotely, also from other European countries, for 10 days a year. Flexible working hours? ⏰ Sounds great! We start work between 8 and 10. Trainings? 🧠 Of course. We provide training to develop hard and soft skills. Convenient location? 🏠 Sure! We invite you to our new office at Rondo Daszyńskiego, but we are currently also working remotely. Dress code? 👕 We definitely say no. There are no rigid dress code rules in our company; sneakers are more than welcome. Friendly atmosphere at work? 🤝🏻 Yes! At PAYBACK, people are the most important asset. Something missing? 🖐 Open communication is our priority, so dare to ask!
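Given the emphasis on pytest above, a small example of testing a pure transformation function. The function, field names, and rules are invented for illustration; real ETL tests would target the actual transforms.

```python
# pytest tests for a (hypothetical) record-normalization transform.
import pytest

def normalize_points(raw: dict) -> dict:
    """Lowercase the partner id and coerce points to a non-negative int."""
    return {
        "partner": raw["partner"].strip().lower(),
        "points": max(0, int(raw["points"])),
    }

def test_normalize_points_basic():
    assert normalize_points({"partner": " REWE ", "points": "15"}) == {
        "partner": "rewe",
        "points": 15,
    }

def test_negative_points_clamped_to_zero():
    assert normalize_points({"partner": "dm", "points": -3})["points"] == 0

def test_missing_field_raises():
    with pytest.raises(KeyError):
        normalize_points({"points": 1})
```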
Data Analyst
Link Group
⚲ Remote
16 000 - 18 000 PLN (B2B)
Wymagania
- SQL
- Oracle
- Cloud
- Data warehouses
- GCP
- BigQuery
- AWS
- Athena
- Redshift
- Tableau
- BI
- Looker
- Python (nice to have)
- R (nice to have)
- Kanban (nice to have)
O projekcie: Benefits: - medical care - multisport card - insurance Wymagania: - Advanced SQL expertise, ideally including Oracle and cloud data warehouses such as GCP BigQuery, AWS Athena, or Redshift - Hands-on experience with business intelligence platforms (e.g., Tableau, Power BI, Looker, or similar tools) - Solid understanding of financial products, including equities, derivatives, and foreign exchange instruments - Familiarity with cloud-based data ecosystems, particularly GCP, considered a strong advantage - Basic programming or scripting knowledge in Python or R is beneficial - Strong interpersonal and communication skills, with the ability to collaborate effectively with non-technical stakeholders - Previous experience working in Agile environments such as Scrum or Kanban teams Codzienne zadania: - Examine and interpret large-scale financial data originating from trading platforms and market data providers to support risk management, compliance, and performance analysis teams - Design and maintain interactive dashboards and reporting solutions using tools such as Tableau, Power BI, or Looker - Create and optimize advanced SQL queries across both on-premise Oracle databases and cloud environments like BigQuery, Redshift, or Athena - Partner closely with traders, risk specialists, and compliance teams to provide insights related to trading activity, risk exposure, and unusual patterns - Contribute within Agile delivery teams, continuously improving reporting capabilities and underlying data models - Maintain high standards of data quality by ensuring accuracy, consistency, and completeness across multiple systems
Big Data Engineer
Connectis_
⚲ Warszawa
26 880 - 35 280 PLN (B2B)
Wymagania
- Big data
- Spark
- Data mapping
- TDD
- BDD
- DevOps
- Apache Spark
- API
- Java
- Hive (nice to have)
- SQL (nice to have)
- Jenkins (nice to have)
- GitLab (nice to have)
- Git (nice to have)
- Python (nice to have)
- Airflow (nice to have)
- Azure AD (nice to have)
- ADLS (nice to have)
- Databricks (nice to have)
- SAFe (nice to have)
- REST API (nice to have)
- C (nice to have)
O projekcie: Together with our Partner, one of the European leaders in the banking industry, we are looking for two people for the position of Big Data Engineer. The projects concern B2B and B2C banking ➡️ platforms for large loans and professional lending, and applications supporting digital transformation. ✨ WE OFFER: - Participation in integration meetings and technology meetups, sharing knowledge and experience with the wider engineering community. - Support from a dedicated person at Connectis who is always available to help you with project-related matters. - Work in an international engineering environment with a strong code-quality culture (TDD, code review, clean code). - Participation in a large, multi-year modernization and cloud migration project, with real influence on the system architecture. - 🏢 Hybrid model, most of the time in the office (3 days a week), flexibility on the remaining days. - Daily contact with a modern Big Data stack in an investment banking context. - Work equipment, a laptop, and peripherals provided from day one. - A modern, comfortable office in Warszawa. Thank you for all applications. Please note that we will contact selected candidates. 12671/MG Wymagania: 🔍 WHAT WE EXPECT FROM YOU: - Min. 4 years of commercial experience in data engineering / software engineering. - Advanced knowledge of Apache Spark (Scala API preferred, Java API acceptable). - Fluent English (C1+) ➡️ daily technical and business communication in an international environment. - A university degree in Computer Science, Software Engineering, or a related field. - Solid TDD practice ➡️ writing testable code as the standard, not the exception. - Production-level working knowledge of Hive and SQL. - We are looking for a Senior with Tech Lead potential. Nice to have: - CI/CD (Jenkins, GitLab CI, or similar), Git flow, branching strategies. - Python (especially with Airflow ➡️ pipeline orchestration). - Experience with Azure (ADF, ADLS, Databricks, or related services). - Experience in SAFe / release train environments. - Knowledge of microservice architecture and REST APIs. Codzienne zadania: - Designing and implementing pipelines for ingesting, transforming, and processing large data volumes. - Industrializing data processing solutions ➡️ reliability, fault tolerance, scalability. - Developing Spark/Scala jobs: batch processing, data mapping, performance optimization. - Analyzing data sources and turning business requirements into technical specifications. - Contributing to Big Data architecture design and recommending technical solutions. - Code review, attention to clean code, TDD/BDD standards, and good DevOps practices. - Working in an Agile/Scrum team (2-3 week sprints, release train). - L3 support ➡️ troubleshooting and maintaining production environments. - Identifying technical risks and proposing mitigation strategies. - Writing technical documentation, test plans, and specifications.
Senior SAP Analytics Developer (SAC / Snowflake / Tableau)
SquareOne
⚲ Remote
33 600 - 36 960 PLN (B2B)
Wymagania
- SAP
- SAP Analytics Developer
- SAC
- SAP Core Data Services
- CDS Views
- Table functions
- Tableau
- Snowflake
- SQL
- Stakeholder management
- Agile/Scrum
- Enterprise finance processes
O projekcie: We are delivering an advanced Analytics landscape for a European enterprise organization operating in a complex financial and reporting environment. The project focuses on the design, development, and optimization of modern analytics solutions that support both strategic and operational decision-making. The role combines deep technical expertise with strong consulting and business analysis capabilities. The team is responsible for data architecture design, analytics modeling, dashboard development, and advisory support on Analytics best practices. The project is delivered in an Agile (Scrum) environment with close collaboration between business stakeholders and technical teams. Wymagania: - Several years of experience as a SAP Analytics Developer, with strong hands-on expertise in SAP Analytics Cloud (SAC). - Strong experience in SAP Core Data Services (CDS Views, table functions) and enterprise data modeling. - Proven experience working with Tableau for advanced analytics and visualization use cases. - Practical experience with Snowflake and strong SQL proficiency. - Demonstrated ability to analyze complex business requirements and design scalable technical solutions and architectures. - Solid consulting background with experience engaging stakeholders, proposing solutions, and managing expectations. - Experience in operating and maintaining analytics solutions, including performance optimization and troubleshooting. - Ability to produce high-quality functional and technical documentation. - Strong communication and stakeholder management skills. - Experience working in Agile Scrum environments. - Good understanding of enterprise finance processes. Codzienne zadania: - Design, develop, and maintain Analytics solutions using SAP Analytics Cloud (SAC). - Build and optimize robust data models using SAP Core Data Services (CDS Views, table functions). - Collaborate with business stakeholders to gather requirements and translate them into technical specifications and scalable system designs. - Develop advanced reporting and data visualizations using Tableau. - Leverage Snowflake cloud data warehousing capabilities, utilizing strong SQL skills for data transformation and insight generation. - Provide consulting support on Analytics architecture, development standards, and best practices. - Troubleshoot, optimize, and support deployed Analytics solutions. - Prepare and maintain functional and technical documentation. - Actively participate in Scrum ceremonies and contribute to Agile delivery practices. - Support enterprise finance processes through enhanced reporting and analytical capabilities.
Senior Data Platform Governance Specialist
IQVIA
⚲ Warszawa
156 000 - 289 000 PLN brutto (UoP)
Wymagania
- Azure
- GitLab
- Terraform
- ServiceNow
- Snowflake
- Databricks
Role Summary The Senior Data Platform Governance Specialist drives governance excellence, operational discipline, and cost predictability across global data platforms. This role champions a “Governance-as-Code” approach, ensuring that platforms such as Snowflake, Databricks, and Cloudera are secure, audit-ready, and optimized by default. The position acts as a bridge between Platform Engineering, FinOps, Security, and Leadership, enabling engineering velocity without compromising enterprise risk standards. Key Responsibilities 1. Strategic Governance and Policy-as-Code (“Golden Rules”) • Define, develop, and maintain “Golden Paths” and global standards for data platform architecture, usage, and compliance. • Translate written governance policies into automated, executable checks (Governance-as-Code) using GitLab CI/CD, Terraform, ServiceNow, and platform-native security controls. • Maintain a version-controlled, audit-ready repository of governance policies ensuring traceability and seamless compliance for regulated workloads. 2. Analytics, Auditing, and Executive Reporting • Design, build, and deliver monthly Best-Practice Scorecard Reports to track platform health, cost anomalies, and compliance drift. • Present actionable compliance, security, and FinOps metrics to senior IT and business leadership, highlighting risks and optimization opportunities. 3. Engineering Enablement and Culture • Conduct training sessions, architectural forums, and 1:1 coaching to upskill data engineers and architects on best practices. • Partner with engineering teams to ensure compliance controls are embedded early in the deployment lifecycle (Shift-Left). 4. Cross-Functional Alignment • Serve as the central governance liaison across Platform Engineering, FinOps, Cloud Security, and other teams. • Ensure platform configurations align with corporate data privacy standards (e.g., GDPR, HIPAA) and industry regulations. Required Knowledge, Skills, and Abilities • Platform Expertise: 7+ years of hands-on experience managing, architecting, or governing at least two major enterprise data platforms (e.g., Cloudera CDP, Snowflake, Databricks, Oracle, SQL Server). • Cloud Infrastructure: Strong foundational knowledge of AWS and/or Azure infrastructure concepts (compute sizing, storage classes, data transfer/egress economics). • Regulatory Acumen: Proven experience creating and enforcing technical standards in highly regulated, global enterprises (Pharma, Life Sciences, Healthcare, or Finance preferred). • Modern Tooling: Hands-on familiarity with automation and ITSM tools (GitLab CI, Terraform, ServiceNow). • Frameworks: Solid knowledge of ITIL service management and Policy-as-Code concepts. • Leadership and Influence: Excellent communication and stakeholder management skills; able to drive change without direct authority and push back constructively when needed. Minimum Education and Certifications • Education: Bachelor’s degree in Computer Science, Information Technology, Systems Engineering, or related field. • Highly Preferred Certifications: • ITIL 4 Foundation or Managing Professional • Platform-specific certifications (Snowflake, Databricks, AWS/Azure, Cloudera) IQVIA Core Competencies • Results Orientation • Collaboration and Teamwork • Strategic Problem Solving • Adapting to Change
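One possible shape of the "Governance-as-Code" checks described above, assuming policies are enforced in CI: parse a `terraform show -json` plan dump and fail the pipeline when a planned resource lacks a required tag. The tag name and file path are assumptions.

```python
# Fail CI if any planned Terraform resource is missing a required tag.
import json
import sys

REQUIRED_TAG = "cost_center"  # hypothetical mandatory cost-allocation tag

with open("plan.json") as f:          # output of `terraform show -json plan.out`
    plan = json.load(f)

violations = []
for change in plan.get("resource_changes", []):
    after = (change.get("change") or {}).get("after") or {}
    tags = after.get("tags") or {}
    if isinstance(tags, dict) and REQUIRED_TAG not in tags:
        violations.append(change.get("address"))

if violations:
    print("resources missing required tag:", violations)
    sys.exit(1)  # non-zero exit fails the GitLab CI pipeline stage
print("all planned resources carry the required tag")
```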
Senior Data Engineer
SquareOne
⚲ Remote
21 840 - 26 880 PLN (B2B)
Wymagania
- AWS
- Azure
- Snowflake
- dbt
- Witboost
- Dremio
O projekcie: We are currently looking for Senior Data Engineers to join our Swedish customer within the automotive industry. Wymagania: - Minimum 5 years of experience in Data Engineering - Experience across multiple large-scale assignments of varying complexity - Proven hands-on delivery of pipelines and data products using: AWS, Azure, Snowflake, dbt, Witboost, Dremio - Strong expertise in dbt and modern ELT/ETL practices - Experience with CI/CD pipelines and automated testing in data environments - Experience working within Data Mesh setups - Practical application of DataOps principles in production - Strong understanding of: Data governance, Data contracts, Data modeling patterns, Reusable standards / golden paths Codzienne zadania: - Build, maintain, and optimize ETL/ELT processes in AWS, Azure, and Snowflake environments. - Implement transformations and workflows using dbt and other relevant tools. - Ensure data is accessible, accurate, and consistent across the organization. - Monitor pipeline performance and respond quickly to production issues. - Automate data testing and deployment processes. - Maintain high code quality and DevOps standards for the data team.
Senior Backend Developer / Data Engineer (Snowflake, SQL, dbt)
AVENGA (Agencja Pracy, nr KRAZ: 8448)
⚲ Remote
26 880 - 30 240 PLN (B2B)
Wymagania
- SQL
- Snowflake
- dbt
O projekcie: We are currently looking for a Backend Developer / Data Engineer to join a data and analytics initiative focused on building modern, scalable data solutions. This role combines hands-on data engineering with close collaboration with business stakeholders. Wymagania: - Strong hands-on experience with SQL - Practical experience with Snowflake - Experience working with dbt (data build tool) - Background in backend development or data engineering - Solid understanding of data modeling principles - Good communication skills and ability to work with non-technical stakeholders - Analytical mindset and problem-solving skills - Experience in data analytics or reporting environments - Exposure to business analysis activities (requirements clarification, stakeholder discussions) - Ability to bridge technical and business perspectives Codzienne zadania: - Designing and developing data models and transformations using Snowflake, SQL, and dbt - Building and maintaining scalable and efficient data pipelines - Optimizing query performance and data processing workflows - Ensuring data quality, consistency, and reliability - Collaborating with analysts and business stakeholders to clarify requirements - Translating business needs into well-structured technical data solutions - Supporting data-driven decision-making through high-quality datasets
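As a side note on how dbt fits this stack: alongside SQL models, dbt also supports Python models on Snowflake. A minimal sketch is below; the upstream model name and columns are hypothetical, and in practice most transformations of this kind would stay in plain dbt SQL.

```python
# models/marts/customer_orders.py - a dbt Python model sketch for Snowflake.
# "stg_orders" and the column names are illustrative assumptions.
def model(dbt, session):
    dbt.config(materialized="table")
    # dbt.ref() returns a Snowpark DataFrame on Snowflake; pandas is simpler here
    orders = dbt.ref("stg_orders").to_pandas()  # Snowflake uppercases columns
    summary = orders.groupby("CUSTOMER_ID", as_index=False).agg(
        ORDER_COUNT=("ORDER_ID", "count"),
        TOTAL_AMOUNT=("AMOUNT", "sum"),
    )
    return summary  # dbt materializes the returned DataFrame as a table
```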
Scala Spark Developer
GFT Poland
⚲ Kraków
14 580 - 22 350 PLN (PERMANENT)
Wymagania
- Scala
- Spark
- Google Cloud Platform
- Apache Airflow
- Testing
- Product development
- GitHub
- Airflow
- Python
- Jira
- Confluence
- Communication skills
- GCP (nice to have)
- JUnit (nice to have)
- TDD (nice to have)
- BDD (nice to have)
- Jenkins (nice to have)
- Clean Code (nice to have)
- DRY (nice to have)
- Elasticsearch (nice to have)
- Cloud (nice to have)
- Azure (nice to have)
- Amazon Web Services (nice to have)
O projekcie: As a Scala Spark Developer, you will work on large-scale data processing solutions within Google Cloud Platform, contributing to one of the largest Data Lakes in the world. You will design and develop data pipelines using Apache Spark and validate your solutions with Apache Airflow. You will be part of an agile, international team operating in a highly regulated financial environment. We value engineers who can challenge the status quo while ensuring full compliance with existing standards and regulations. We offer you - Hybrid work from the Kraków office. - Working in a highly experienced and dedicated team - Benefit package that can be tailored to your personal needs (private medical coverage, sport & recreation package, lunch subsidy, life insurance, etc.) - Online training and certifications matched to your career path - Access to an e-learning platform - Mindgram - a holistic mental health and wellbeing platform - Work From Anywhere (WFA) - the temporary option to work remotely outside of Poland for up to 140 days per year (including Italy, Spain, the UK, Germany, Portugal, and Bulgaria) - Social events Wymagania: Your skills: - Very good knowledge of Scala and Spark - Experience with GitHub for version control - Basic experience with Airflow and Python - Familiarity with Jira and Confluence - Experience working in Scrum teams - Good communication skills in English (discussion and presentation) Nice to have: - Hands-on experience with GCP and Apache Airflow - Knowledge of testing approaches and tools (JUnit, TDD, BDD) - CI/CD experience (e.g. Jenkins) - Strong engineering mindset: clean code, SOLID, KISS, DRY, design patterns - Experience with Elasticsearch - Exposure to at least one major cloud provider (Microsoft Azure or Amazon Web Services) - Previous Secure Code Warrior certifications Codzienne zadania: - Deliver sprint goals by actively contributing to backlog execution - Collaborate closely with software developers across multiple locations (including UK and China) - Work in line with the Scrum framework and agile best practices - Maintain and improve code quality through testing, refactoring and bug fixing - Support Product Owners in refining and understanding technical backlog items - Share knowledge within the team and across the wider organization - Participate in the full end-to-end product development lifecycle, including production releases
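To make the Spark-plus-Airflow pairing above concrete, here is a minimal Airflow 2.x DAG that submits a Spark job and runs a follow-up validation task. The application path, entry-point class, and connection id are placeholders, not details of the actual project.

```python
# Sketch: orchestrate a Spark job and validate its output from Airflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

def validate_output():
    # Placeholder hook: e.g., row-count or freshness checks against the
    # table the Spark job just wrote.
    print("validating Spark job output...")

with DAG(
    dag_id="daily_spark_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_spark_job = SparkSubmitOperator(
        task_id="run_spark_job",
        application="/opt/jobs/ingest.jar",   # hypothetical Scala/Spark artifact
        java_class="com.example.IngestJob",   # hypothetical entry point
        conn_id="spark_default",
    )
    validate = PythonOperator(task_id="validate_output", python_callable=validate_output)
    run_spark_job >> validate
```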
Migration Environment Owner
AVENGA (Agencja Pracy, nr KRAZ: 8448)
⚲ Gdańsk, Gdynia
21 840 - 25 200 PLN (B2B)
Wymagania
- SAFe
O projekcie: As a Migration Environment Owner, you will be responsible for the lower and higher environments used by Customer, Migration, and ADS (Arrangement Directory Services) ART in the Core Banking program. This ART delivers: - Account migrations from current ledger systems to the new ledger system (N-core) with minimal customer impact. - Improvements, fixes, and data clean-up to ensure accurate customer records are available in N-core for new product offers and migrations. - Full set of arrangements for consumer usage of ADS. Wymagania: - Several years of experience managing IT projects or services in a complex IT landscape (mainly related to coordinating IT environments and migration activities rather than typical project management). - Fluent English for daily work. - Strong relationship-building, collaboration, and presentation skills. - Ability to understand technical aspects of environments and resolve issues with managers and technical experts. - Experience or understanding of agile methodologies. - Proactive and systematic mindset. - Availability to work 3 days per week in the office (Gdańsk/Gdynia). Codzienne zadania: - Take overall responsibility for lower (Dev, Test) and higher (PreProd) Migration environments. - Support preparations for Go Live events in Production, including weekend execution. - Proactively identify and manage external dependencies, risks, and impediments impacting ART deliveries. - Provide migration environment requirements to environment providers for procurement, configuration, and maintenance. - Plan, schedule, and govern migration environments, including Trial Conversion schedules, refreshes/imports, and technical support alignment. - Manage user access for all environments and resolve scheduling conflicts between ARTs. - Collaborate frequently with other ARTs, teams, and stakeholders. - Engage with Retail Technology Services to ensure environment refreshes, code installations, and DB imports meet requirements. - Monitor and consider the state of Core Banking test environments (INT1, PERF) when planning test events. - Drive Core legacy teams for environment-related work. - Manage tactical vendor relationships, including coordinating IBM work for ADS and Migration environments. - Maintain Jira tickets, enabler features, and external dependencies related to environments. - Facilitate ART ceremonies and processes to support team delivery.
Data Scientist (M/K)
Antal
⚲ Warszawa
25 200 - 28 560 PLN (B2B)
Wymagania
- Python
- pandas
- SQL
- Matplotlib
- ggplot2
- WTW (nice to have)
- SAS (nice to have)
- Git (nice to have)
- AWS (nice to have)
- Azure (nice to have)
O projekcie: Profile: Data Scientist. Place of work: hybrid model – 3 days per week from the office (Warszawa) + 2 days per week remote – this is a hard requirement. Cooperation type: B2B contract. Rate: 150 - 170 PLN/h. Benefits: - Private medical care - Multisport card or access to the MyBenefit cafeteria platform - Life insurance Wymagania: a. Must-have: - Very good command of Python for data analysis and modelling. - Solid knowledge of machine learning, predictive analytics, and statistical inference. - Experience with data processing tools (e.g., pandas, SQL) and data visualization tools (e.g., matplotlib, seaborn, ggplot2). - Ability to work with a code repository. - Strong analytical skills: well-developed problem-solving abilities and curiosity when working with data. - Ability to communicate effectively and collaborate within interdisciplinary teams. - Availability to work from the office 3 days per week. b. Nice-to-have: - Knowledge of WTW pricing tools (Emblem, Radar). - Knowledge of the insurance industry. - Knowledge of SAS tools. - Knowledge of version control systems (Git). - Knowledge of cloud environments (AWS, Azure). Codzienne zadania: - Designing, implementing, and maintaining statistical and machine learning models supporting actuarial functions. - Analyzing large, complex datasets to draw actionable conclusions that support decision-making. - Conducting exploratory data analysis (EDA) and feature engineering to improve model quality and interpretability. - Supporting the automation and optimization of actuarial processes using modern data science tools and frameworks. - Working closely with actuaries, underwriters, and claims teams to translate business needs into analytical solutions. - Presenting analysis results and recommendations in a form accessible to technical audiences.
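As a rough illustration of the workflow this posting describes (feature engineering feeding a baseline model), here is a self-contained Python sketch on synthetic data; a real actuarial setting would of course use policy and claims data and far more careful validation.

```python
# Feature engineering + baseline classifier on synthetic data (illustrative only).
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 80, 500),
    "vehicle_value": rng.normal(40_000, 12_000, 500),
    "region": rng.choice(["north", "south", "west"], 500),
    "claim": rng.integers(0, 2, 500),  # synthetic binary target
})

features = ColumnTransformer([
    ("num", StandardScaler(), ["age", "vehicle_value"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"]),
])
pipeline = Pipeline([("features", features), ("model", GradientBoostingClassifier())])

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="claim"), df["claim"], random_state=0
)
pipeline.fit(X_train, y_train)
print(f"holdout accuracy: {pipeline.score(X_test, y_test):.2f}")
```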
Data Scientist
Ework Group
⚲ Budapest, Warszawa, Gdynia, Gdańsk
20 160 - 25 200 PLN (B2B)
Wymagania
- Python
- SQL
- AWS (nice to have)
- Azure (nice to have)
O projekcie: For our client in the insurance industry, we are looking for a specialist for the position of Data Scientist. Place of work: hybrid: 3 days per week in the Warszawa office + 2 days per week remote – this is a hard requirement. Wymagania: - Proficiency in Python for data analysis and modelling. - Good knowledge of machine learning methods, predictive analytics, and statistical inference. - Experience with data processing tools (e.g., pandas, SQL) and data visualization tools (e.g., matplotlib, seaborn, ggplot2). - Ability to work with a code repository. - Analytical thinking: strong problem-solving skills and a passion for uncovering insights in data. - Communication: ability to collaborate within interdisciplinary teams. Wymagania nice to have: - Knowledge of WTW pricing tools (Emblem, Radar) - Knowledge of the insurance industry - Knowledge of SAS tools - Knowledge of version control systems (Git) - Knowledge of cloud environments (AWS, Azure) Codzienne zadania: - Developing, implementing, and maintaining statistical and machine learning models supporting actuarial functions. - Analyzing large and complex datasets to derive actionable insights that support decision-making. - Conducting exploratory data analysis and feature engineering to improve model quality and interpretability. - Supporting the automation and optimization of actuarial processes using modern data science tools and frameworks. - Working closely with actuaries, underwriters, and claims teams to translate business challenges into analytical solutions. - Presenting results and recommendations in a form understandable to technical audiences.
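Complementing the sketch after the previous posting, the statistical-inference side of this kind of role often means GLMs. Below is an illustrative Poisson claim-frequency model with an exposure offset, again on synthetic data.

```python
# Poisson GLM for claim frequency with a log-exposure offset (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1_000
df = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),
    "exposure": rng.uniform(0.2, 1.0, n),  # policy-years
})
true_rate = np.exp(-2.0 + 0.01 * (60 - df["driver_age"]))
df["claims"] = rng.poisson(true_rate * df["exposure"])

model = smf.glm(
    "claims ~ driver_age",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["exposure"]),
).fit()
print(model.summary())
```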
Data Platform Architecture Expert
IQVIA
⚲ Warszawa
218 000 - 460 000 PLN brutto (UoP)
Wymagania
- Data Lake
- Google Cloud Platform
- Data Warehousing
- Cloudera
- Azure
- Snowflake
- Databricks
- Amazon Web Services
IQVIA is seeking a talented and motivated individual to join our Data Office as a member of a dedicated three-person team supporting the Data Platform Architecture Pillar. This position will focus on critical initiatives within our Center of Excellence (CoE) for Data Platforms, with a central mandate to reduce friction in deploying and operating data platforms. The role will directly contribute to the implementation of strategies defined by the Data Platform Architecture Pillar, ensuring our platforms are robust, scalable, and aligned with organizational objectives. Primary Goals and Objectives • Reduce time, cost, and complexity associated with deploying and operating data platforms across IQVIA • Ensure all data platform initiatives align with enterprise architecture standards and strategies • Promote and enforce use of standardized deployment patterns, templates, and operational best practices • Enhance collaboration and knowledge sharing between data platform stakeholders • Drive continuous process improvements to increase platform reliability, performance, and user satisfaction Role Responsibilities • Support the Data Platform Architecture Pillar by operationalizing platform strategies and architectural patterns • Organize and facilitate activities in the Center of Excellence for data platforms, acting as a subject matter expert and key contributor • Identify and address points of friction in platform deployment and operations, recommending and implementing effective solutions • Collaborate with platform engineering and platform management to align platform solutions with organizational requirements • Contribute to the development and refinement of platform deployment frameworks, automation scripts, and operational playbooks • Monitor platform performance and provide guidance on operational best practices • Document, share, and promote standards, templates, and reusable assets developed by the Center of Excellence • Assist in evaluating and integrating new data platform technologies, tools, and services • Provide training and support to platform users and other internal stakeholders to facilitate smooth adoption and usage • Support continuous improvement initiatives by collecting feedback and leveraging lessons learned to refine processes and standards • Meet with data platform vendors to understand their roadmap and discuss gaps in feature sets that are needed to meet IQVIA’s requirements Key Qualifications • Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field; advanced degree preferred • 3+ years of experience in data platform engineering, architecture, or operations • Strong foundational knowledge of cloud-based data platforms (e.g., AWS, Azure, GCP) and modern data architecture patterns (data lakes, warehouses, lakehouses, data estates, etc.)
• Experience in deploying, supporting, and operating scalable, secure, and high-availability data platforms • Strong communication skills for documenting solutions and collaborating across technical and non-technical teams • Detail-oriented with a passion for operational excellence and process improvement • Excellent problem-solving skills, with an ability to analyze complex technical issues and develop actionable solutions • Proficiency with scripting, automation tools, and infrastructure-as-code technologies (e.g., Terraform, Ansible, CloudFormation, Bicep) • Familiarity with CI/CD, DevOps practices, and platform monitoring tools • Experience working in a Center of Excellence or similar shared service environment is an asset
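One concrete way a team like this reduces deployment friction is with automated guardrails over infrastructure-as-code. The sketch below scans `terraform show -json` output for resources missing a mandatory tag; the tag key and the policy itself are assumptions for illustration, not IQVIA standards.

```python
# Fail CI when planned resources lack a required tag. Real checks would
# typically also filter by resource type, since not everything is taggable.
import json
import sys

REQUIRED_TAG = "cost_center"  # hypothetical tagging standard

def untagged_resources(plan_path: str) -> list[str]:
    with open(plan_path) as f:
        plan = json.load(f)
    offenders = []
    for change in plan.get("resource_changes", []):
        after = (change.get("change") or {}).get("after") or {}
        tags = after.get("tags") or {}
        if isinstance(tags, dict) and REQUIRED_TAG not in tags:
            offenders.append(change["address"])
    return offenders

if __name__ == "__main__":
    offenders = untagged_resources(sys.argv[1])  # path to `terraform show -json` output
    for address in offenders:
        print(f"POLICY: {address} is missing tag '{REQUIRED_TAG}'")
    sys.exit(1 if offenders else 0)
```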
Business Intelligence (Power BI) Engineer
CLOUDFIDE
⚲ Warszawa, Kraków, Katowice
Wymagania
- Power BI
- Azure
- DAX
- Databricks
- SQL
Why Join Us At Cloudfide, you’ll work where data truly has impact, scale, and purpose. We design and deliver advanced Big Data and cloud-native data platforms for international clients, including major players in retail and Fortune 500. We process massive datasets, operate fully in the cloud ecosystem, and use the latest technologies, including AI, ML, and MLOps. Opportunity overview As a Mid Power BI Developer, you will play a key role in designing and delivering impactful analytics solutions across our clients' cloud data platforms. You'll work closely with both technical teams and business stakeholders, translating complex data into clear, actionable insights. Expect cutting-edge cloud environments (Azure-first), well-structured semantic models, and high-visibility dashboards used across international organizations. Your Responsibilities • Designing, building, and maintaining Power BI reports, dashboards, and semantic models for global clients across diverse industries. • Developing and optimizing DAX measures, calculated columns, and data models to ensure accuracy, performance, and scalability. • Collaborating with Data Engineers to understand data structures and ensure BI layer integration with upstream pipelines and cloud data platforms. • Working with business stakeholders to gather requirements, define KPIs, and translate them into clear, intuitive visualizations. • Maintaining data quality and consistency across reports, with attention to lineage, documentation, and best practices. • Contributing to the development of BI standards, naming conventions, and reusable component libraries. • Participating in CI/CD practices and version control for Power BI assets (e.g., via Azure DevOps, git integration). • Staying up-to-date with the Power BI ecosystem and proposing improvements to enhance the analytics offering. What We're Looking For • 2+ years of hands-on experience in Power BI development in a commercial environment. • Strong DAX proficiency - ability to write complex, optimized measures and understand evaluation context. • Solid understanding of data modeling concepts: star schema, snowflake schema, relationships, and cardinality. • Experience connecting Power BI to a variety of data sources including Azure Synapse, SQL databases, Databricks, or data lakehouses. • Advanced SQL skills - comfortable querying and transforming data at the source level. • Ability to work independently, communicate with stakeholders, and take ownership of analytics deliverables. • Good understanding of report performance optimization and Power BI service deployment (workspaces, row-level security, dataflows). Nice to Have • Experience with Power BI Premium or Fabric. • Familiarity with Azure data ecosystem (Azure Synapse, Azure Data Factory, ADLS). • Exposure to Python or PySpark for data preparation. • Knowledge of CI/CD for Power BI assets (Azure DevOps, Tabular Editor, ALM Toolkit). • Basic understanding of data engineering concepts and Lakehouse architecture. Why You’ll Love Cloudfide • Benefits: MyBenefit platform / Multisport • Enel‑Med private medical care • Professional Growth: • Annual 2,000 PLN development budget • Access to e‑learning platforms • Real opportunities to expand responsibility, explore new areas, and take ownership of your own initiatives - most of our Leads come from internal promotions, as the company continuously grows.
• Work Environment: • Long-term cooperation: UZ or B2B (we value stable, lasting relationships) • Remote-first, work from anywhere in the world (workation) • Flexible hours & strong focus on work-life balance • Collaboration with top engineers you can truly learn from • Flat structure - every voice counts, and you help shape the company • Culture & Integration: • Company-wide trips (including international ones) • Team integration budgets • Open communication and supportive, passionate teams Equal Opportunity Cloudfide is an Equal Opportunity Employer. We value diversity and are committed to creating an inclusive environment for all employees.
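For the CI/CD-for-Power-BI angle mentioned above, here is a minimal sketch that triggers a dataset refresh through the Power BI REST API. The workspace and dataset ids are placeholders, and obtaining the Azure AD bearer token (typically via MSAL with a service principal) is left out of scope.

```python
# Trigger a Power BI dataset refresh from a pipeline step (illustrative ids).
import os

import requests

GROUP_ID = "<workspace-guid>"   # placeholder
DATASET_ID = "<dataset-guid>"   # placeholder

def trigger_refresh(token: str) -> None:
    url = (
        "https://api.powerbi.com/v1.0/myorg/"
        f"groups/{GROUP_ID}/datasets/{DATASET_ID}/refreshes"
    )
    resp = requests.post(url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()  # the service answers 202 Accepted on success

if __name__ == "__main__":
    trigger_refresh(os.environ["POWERBI_TOKEN"])
```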
Administrator Baz Danych
Antal
⚲ Warszawa, Łódź, Olsztyn, Katowice
21 840 - 21 840 PLN (B2B)
Wymagania
- Oracle
- PostgreSQL
- RAC
- Linux
- Windows Server
- VMware
- KVM
- Apex
- ORDS
- WebLogic (nice to have)
- Forms&Reports (nice to have)
- OEM (nice to have)
- Exadata (nice to have)
- Cloud (nice to have)
O projekcie: Database Administrator (Oracle / PostgreSQL / MS SQL). Location: 100% on-site work from the office: Warszawa, Łódź, Olsztyn, Katowice. Rate: - employment contract: 19 000 – 20 000 PLN gross per month - B2B: 130 PLN/h. We are looking for an experienced Database Administrator to join our team and support us in maintaining and developing key database environments. If you enjoy working with advanced technologies, value stability, and at the same time want to have a real impact on digital transformation, this role is for you. We offer: - The opportunity to fully use and develop your skills. - A clearly defined development path based on a skill matrix. - Support and onboarding over a 3-month induction period. - Work in a team built on collaborative partnership. - Departmental and company-wide team events. - Benefits: company social benefits fund (ZFŚS), MyBenefit cafeteria, subsidized sports card, life and medication insurance, discounts on own-brand products. - A real impact on development and digital transformation in the pharmaceutical sector. Wymagania: - Several years of experience administering Oracle databases and the ability to manage PostgreSQL independently. - Practical knowledge of Oracle RAC, Data Guard, and PostgreSQL administration. - Knowledge of Linux and Windows Server. - Experience with VMware / KVM / PowerVM is welcome. - Ability to build HA/DR solutions and automate processes. - Experience with database migrations and upgrades. - Knowledge of performance optimization and tuning methods. - Ability to diagnose problems and deliver corrective solutions. - Experience administering ORDS/APEX (configuration, upgrades, monitoring). - Knowledge of database licensing rules. Nice to have: - Knowledge of Oracle WebLogic and Forms&Reports. - Ability to analyze operating-system performance. - Knowledge of ZFS Storage Appliance and infrastructure topics. - Experience with OEM, Oracle Exadata, and Oracle Cloud. Codzienne zadania: - Administering and patching database environments: Oracle (90%), Microsoft SQL (5%), PostgreSQL (5%). - Managing database structures: tables, indexes, views, sequences, functions, procedures, roles, and user accounts. - Designing, implementing, and maintaining backup and recovery strategies (RMAN, Data Pump, PostgreSQL tools, recovery tests). - Operating high-availability and disaster-recovery solutions (Data Guard, Exadata RAC, RMAN, PostgreSQL replication). - Creating, cloning, and maintaining development and test environments. - Analyzing logs, alerts, and events for security, compliance, and performance. - Proposing and implementing solutions that improve environment availability and performance. - Building and extending tools that automate administrative tasks (scripts, Ansible, bash/ksh/powershell/python). - Overseeing correct licensing of database environments. - Collaborating with the SRV and NET teams.
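As a small example of the monitoring automation this role calls for, here is a PostgreSQL replication-lag check (PostgreSQL 10+) that could be scheduled from cron or Ansible; the DSN variable and the alert threshold are placeholders.

```python
# Alert when any standby lags the primary by more than a byte threshold.
import os
import sys

import psycopg2

MAX_LAG_BYTES = 64 * 1024 * 1024  # hypothetical 64 MiB threshold

QUERY = """
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
FROM pg_stat_replication;
"""

def main() -> int:
    conn = psycopg2.connect(os.environ["PRIMARY_DSN"])
    try:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            rows = cur.fetchall()
    finally:
        conn.close()
    lagging = [(name, lag) for name, lag in rows if lag and lag > MAX_LAG_BYTES]
    for name, lag in lagging:
        print(f"ALERT: standby {name} is {lag} bytes behind")
    return 1 if lagging else 0

if __name__ == "__main__":
    sys.exit(main())
```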
SQL Developer / Developerka (PostgreSQL)
Sorigo
⚲ Warszawa
Wymagania
- PostgreSQL
- BigQuery
- SQL
- Python
- PL/pg SQL
At Sorigo we implement innovative IT solutions. We have well over a decade of experience in long-term partnerships with large clients. We also believe that work should give people satisfaction rather than being merely an obligation, even if finding the right place for yourself takes time and good guidance from more experienced colleagues. Everyone who joins us does so to grow; your development is where we find our satisfaction and our clients find their benefit. SQL Developer (PostgreSQL) Location: hybrid / Warszawa If you: • Know SQL and PL/pgSQL (PostgreSQL) very well and can apply them when working with large databases • Have experience with very large data volumes and know how to process them efficiently • Can optimize queries and long-running processes, even those taking many hours • Can analyze query execution plans and quickly spot bottlenecks • Know how to work with stored procedures and use them effectively • Are comfortable using Python to maintain and develop business logic • Can resolve performance issues at L2 level • Can analyze data and problems coming from different sources • Have L2 experience, i.e., resolving issues and maintaining processes in a production environment • Speak fluent English (B2/C1) And ideally also: • have experience with BigQuery and integrating different databases Ask yourself whether you: • like working in a friendly atmosphere? • want the satisfaction of inspiring projects built by people with passion? • are unafraid of complex problems, knowing that they drive your growth? • want to be appreciated not only for results but also for your commitment? Answered yes to most of these questions? We'd love to hear from you! What awaits you here: • Projects built on PostgreSQL, data warehouses, and systems processing very large volumes of data • Growth in DWH, data integration, and database solutions • A clear career plan and support for your professional development at L2 level • technological and growth challenges, including in Data and data analysis • a benefits package: medical care, insurance, sports card • work with the latest technologies and market trends • competitive salary • an unbeatable working atmosphere. We hope your CV will show off your experience - what you have done so far, from small projects to large commercial ones - because experience matters, but your growth potential matters more.
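To illustrate the execution-plan analysis this posting emphasizes, here is a sketch that captures a query's EXPLAIN (ANALYZE) output as JSON and reports the most expensive plan node; the connection string and the example table are placeholders.

```python
# Find the costliest node in a query's actual execution plan.
import json
import os

import psycopg2

def slowest_node(sql: str) -> tuple[str, float]:
    """Return (node type, actual total time in ms) for the worst plan node."""
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    try:
        with conn.cursor() as cur:
            cur.execute(f"EXPLAIN (ANALYZE, FORMAT JSON) {sql}")
            raw = cur.fetchone()[0]
            doc = json.loads(raw) if isinstance(raw, str) else raw
            plan = doc[0]["Plan"]
    finally:
        conn.rollback()  # ANALYZE executed the query; discard any effects
        conn.close()
    worst = ("", 0.0)
    stack = [plan]
    while stack:  # walk the plan tree
        node = stack.pop()
        if node["Actual Total Time"] > worst[1]:
            worst = (node["Node Type"], node["Actual Total Time"])
        stack.extend(node.get("Plans", []))
    return worst

if __name__ == "__main__":
    print(slowest_node("SELECT count(*) FROM orders"))  # hypothetical table
```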
Data Analyst
Bloober Team S.A.
⚲ Kraków, Warszawa, Wrocław, Poznań, Gdańsk
Wymagania
- R
- Looker Studio
- BigQuery
- Unreal Engine 5
- SQL
- Python
Job description Looking to break the mold and build unforgettable games? Join a game dev studio run by actual gamers. Bloober Team is not your usual layoff-prone studio. We've got a solid foundation with big plans for the future and believe strongly in creating a collaborative environment where every voice matters. Here, you'll have the chance to co-create and leave your mark on the gaming industry, working on exciting new projects that all share Bloober Team’s unique horror-infused DNA. What sets us apart? • Inclusive culture: We believe in fostering a welcoming atmosphere where everyone feels valued and empowered. • Remote-friendly options: We offer flexibility, allowing you to contribute your best work from wherever you thrive. • Solid foundation: As a financially stable company with a clear vision, we invest heavily in our team and our projects. • Passionate team: You'll be surrounded by dedicated individuals who share your love for creating compelling horror experiences. Currently we are looking for: Data Analyst Apply now, this is more than just a job. This is an opportunity to become a part of something truly special! Below you will find some information on the role: • Cooperation with various internal and external teams on analytics and telemetry implementation in games, • Conducting advanced analyses to understand player behaviour and evaluate game mechanics by navigating databases, extracting relevant data, building dashboards, and troubleshooting data-related issues throughout the process, • Running ad-hoc analyses to address urgent business and/or development needs, • Preparing actionable recommendations based on conducted analyses. Job requirements Feel free to apply if you have: • 3+ years of experience working as a product analyst, data analyst, BI analyst or similar, • Experience with Unreal Engine 5, • Proficiency in SQL with foundational Python/R knowledge for data analysis, • Experience with Looker Studio and BigQuery or similar platforms, • Ability to present results effectively, both orally and visually, with a strong grasp of data visualization methods and best practices, • Logical, analytical, and methodical approach to problem-solving, • Passion for games, • Drive to learn and develop in game analytics and related fields, • Very good command of English.
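For a flavour of the BigQuery work described here, below is a sketch of a day-1 retention query over a hypothetical sessions table; the project, dataset, and column names depend entirely on how telemetry is instrumented in the game.

```python
# D1 retention: of players seen for the first time on day 0, how many
# returned the next day. Table and column names are illustrative.
from google.cloud import bigquery

SQL = """
WITH firsts AS (
  SELECT player_id, MIN(DATE(event_ts)) AS d0
  FROM `project.analytics.sessions`
  GROUP BY player_id
),
daily AS (
  SELECT DISTINCT player_id, DATE(event_ts) AS d
  FROM `project.analytics.sessions`
)
SELECT COUNTIF(daily.player_id IS NOT NULL) / COUNT(*) AS d1_retention
FROM firsts
LEFT JOIN daily
  ON daily.player_id = firsts.player_id
 AND daily.d = DATE_ADD(firsts.d0, INTERVAL 1 DAY)
"""

client = bigquery.Client()  # uses application-default credentials
for row in client.query(SQL).result():
    print(f"D1 retention: {row.d1_retention:.1%}")
```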
Senior Platform Engineer
Cyclad
⚲ Poland (Remote)
160 - 180 PLN/h netto (B2B)
Wymagania
- Powershell
- Terraform
- Databricks
- Azure DevOps
- Python
At Cyclad we work with top international IT companies to boost their potential in delivering outstanding, cutting-edge technologies that shape the world of the future. We are seeking an experienced Platform Engineer to join our client, an IT company specializing in AI consultancy. You will have a key role in designing, building, and operating enterprise-grade data platforms that enable analytics teams to deliver real business value. You will be responsible for the Azure foundation behind Business Intelligence and Data solutions: from secure networking and identity, through Infrastructure as Code, to Databricks operations and CI/CD automation. This role is ideal for someone with a strong platform/SRE mindset who enjoys building scalable cloud architectures, automating everything, and acting as a technical advisor for analytics and BI teams. Project information: • Work Setup: 100% Remote work • Rate: 160 – 180 PLN/net/h + VAT (B2B contract) • Start date: March 2026 / depending on candidate availability • Language: English (min B2+) Project scope: • Design and operate enterprise Azure data platforms (identity, networking, storage, security). • Build Infrastructure as Code with Terraform (modules, environments, Azure + Databricks providers). • Create and maintain CI/CD pipelines (Azure DevOps / GitHub Actions / GitLab). • Manage Databricks from the platform side (clusters, jobs, secrets, repos, permissions). • Implement monitoring and observability (Azure Monitor / Log Analytics). • Support analytics teams by ensuring stability, performance, and security. • Troubleshoot production issues and drive RCA. • Automate operations using PowerShell and/or Python. • Act as a platform advisor for BI and data teams. Requirements: • Strong admin-level experience with Microsoft Azure (Entra ID / Azure AD, RBAC, networking, ADLS Gen2, Key Vault). • Commercial experience with Terraform (modular, multi-env setups). • Hands-on CI/CD using Azure DevOps, GitHub Actions or GitLab. • Practical platform knowledge of Databricks (clusters, jobs, governance). • Platform/SRE mindset (monitoring, incidents, reliability). • Scripting skills (PowerShell/Python). • Experience with enterprise cloud architectures. • English & Polish proficiency. • Power BI background. We offer: • Full-time job agreement based on B2B • Private medical care with dental care (covering 70% of costs) + rehabilitation package. Family package option possible • Multisport card (also for an accompanying person) • Life insurance
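As an example of the platform-side Databricks operations in scope here, the sketch below lists recent job runs through the Jobs 2.1 REST API and reports failures; the host and token come from the environment, and the 24-hour lookback is an arbitrary illustrative choice.

```python
# Report Databricks job runs that finished in a non-SUCCESS state.
import os
import time

import requests

HOST = os.environ["DATABRICKS_HOST"]   # e.g. https://adb-<id>.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]
LOOKBACK_MS = 24 * 3600 * 1000         # last 24 hours

def failed_runs() -> list[dict]:
    resp = requests.get(
        f"{HOST}/api/2.1/jobs/runs/list",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={
            "start_time_from": int(time.time() * 1000) - LOOKBACK_MS,
            "completed_only": "true",
        },
    )
    resp.raise_for_status()
    runs = resp.json().get("runs", [])
    return [r for r in runs if r.get("state", {}).get("result_state") != "SUCCESS"]

if __name__ == "__main__":
    for run in failed_runs():
        print(f"FAILED: run {run['run_id']} of job {run['job_id']}")
```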
Senior Data Architect
Craftware
⚲ Warszawa
180 - 220 PLN/h netto (B2B)
Wymagania
- Azure Databricks
- Azure
- Data Architecture & Data Modeling
- SQL
- Python
- Apache Spark
Craftware is a technology company of over 500 experts, empowering large organizations to solve complex business challenges with modern IT solutions – from sales systems and automation to data platforms and AI. We operate where technology must be reliable, secure, and scalable. We deliver end-to-end projects: from analysis and architecture through implementation to development and maintenance. We are a trusted partner of industry leaders such as Salesforce, Veeva, UiPath, and Databricks. Model: remote. Employment type: full-time. Role The Data Assets, Analytics & AI department at a leading global Consumer Health organization focuses on enabling digital transformation by creating best-in-class analytical solutions, supporting data-driven decision-making and performance optimization across the organization. You will be part of the global Data Assets, Analytics & AI platform and will be responsible for technically guiding projects in the Data & Analytics domain. You will partner with business stakeholders, data scientists, analytics leads, data and platform architects, engineers, as well as implementation partners and vendors, acting as the technical lead for scalable Core Data Assets and Data Products. You will ensure proper data management, data ingestion, pipeline orchestration, and the delivery of high-quality data for analytical consumption. If you are interested in joining a young and dynamic team driving digital transformation within a global Consumer Health environment, we would love to hear from you. Responsibilities • Own and lead the technical implementation and management of well-integrated, enriched, and reusable data models and analytics products, • Collect and manage data requirements from data consumers, translate them into technical concepts with a focus on reusability and cost efficiency, and collaborate with data providers, • Ensure best-practice data management and quality assurance processes are defined, applied, and continuously improved. • Collaborate with data architects, analytics leads, and data scientists to promote data reuse and ensure sustainable pipelines and analytical products, • Partner with global and regional business stakeholders as well as IT teams (data architecture, analytics, data science, platform product managers, integration architects) to ensure delivery aligned with business priorities and IT objectives, • Lead the integration of multiple data sources to develop globally harmonized data models and KPIs that meet market requirements, • Guide delivery teams (primarily data and BI engineers, internal and external), provide cost and timeline estimations, challenge assumptions, and ensure high-quality delivery, testing, and proper handover to support teams, • Contribute to best-practice knowledge sharing within the data architecture community and support the continuous improvement of the data analytics technology stack by providing feedback and project requirements.
Qualifications • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, • 5+ years of experience in Data & Analytics, preferably within the CPG industry, • Strong expertise in data architecture and data engineering, particularly with Spark-based processing solutions. Tech stack: Azure Data Factory, Azure Databricks, SQL, Python (Snowflake is a plus), • Proficiency in data modeling techniques with the ability to mentor and support data engineers in best practices and implementation strategies, • Experience with reporting and visualization tools such as Tableau or Power BI, • Solid understanding of data management practices (data cataloging, data quality management, data compliance). • Experience with the GitHub ecosystem, including version control, GitHub Actions (CI/CD), and pull request workflows. Knowledge of Azure DevOps is a plus, • Experience delivering data as a product and leading data engineering teams, • Knowledge of commercial domain data (e.g., CRM, eCommerce, Sell-Out), • Experience working with Agile methodologies (Scrum, Kanban), • Strong analytical and problem-solving skills combined with sound business judgment, • Excellent interpersonal, consulting, communication, and presentation skills, • Fluent English (written and spoken), intercultural awareness (Europe, US, India), and willingness to travel occasionally. We offer • B2B contract, • Daily support from team leaders, • Dedicated certification budget, • Assistance in defining and support in your development path, • Benefits package, • Integration trips/events.
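To give the data-quality-management requirement some shape, here is a PySpark sketch of a pre-publication quality gate; the table and column names are invented, and a production version would likely live in a framework rather than a loose script.

```python
# Validate a curated dataset before exposing it for analytical consumption.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dq_gate").getOrCreate()
df = spark.read.table("curated.sales")  # hypothetical curated table

checks = df.agg(
    F.count("*").alias("rows"),
    F.sum(F.col("amount").isNull().cast("int")).alias("null_amounts"),
    F.countDistinct("order_id").alias("distinct_orders"),
).first()

errors = []
if checks["rows"] == 0:
    errors.append("dataset is empty")
if checks["null_amounts"] > 0:
    errors.append(f"{checks['null_amounts']} null amounts")
if checks["distinct_orders"] != checks["rows"]:
    errors.append("order_id is not unique")

if errors:
    raise ValueError("data-quality gate failed: " + "; ".join(errors))
```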
PostgreSQL Administrator (k/m)
MODIVO PLATFORM
⚲ Warszawa, Wrocław, Zielona Góra, Polkowice
Wymagania
- PostgreSQL
- CI/CD
- GCP
- SQL
Your responsibilities: • installing, configuring, and upgrading PostgreSQL servers, • monitoring database performance and availability, • creating and implementing backup and data recovery policies, • optimizing SQL queries and database structures, • managing security and access control, • working with development teams on rolling out new solutions, • resolving technical issues related to the databases, • automating administrative tasks, • writing technical documentation, • training users in working with the databases. Our expectations: • a minimum of 2 years of commercial experience in a similar role, • experience with CI/CD systems, • very good knowledge of PostgreSQL, • familiarity with monitoring and automation tools, • the ability to optimize SQL queries, • knowledge of database security topics, • experience with large datasets, • experience with high-availability systems such as e-commerce platforms, • analytical thinking, • independence and good work organization, • technical English, • knowledge of data replication in PostgreSQL, • PostgreSQL certifications and GCP cloud experience are welcome. What you gain with us: • work at a strong, fast-growing company with Polish roots and international reach, • the chance to shape well-known and well-liked brands from our portfolio: CCC, eobuwie, HalfPrice, Boardriders, • a modern work environment built into our business model - we operate in an omnichannel industry of the future, • stable employment under an employment contract, or B2B cooperation, • discounts on attractive purchases in our brands: CCC, eobuwie, HalfPrice, • access to private medical care with the option of extending the package to your family, • a Multisport card in the variant that suits your needs, plus access to the BeActive platform, • the option of joining group life insurance on preferential terms, • an employee referral program - you can receive a financial bonus for referring someone who joins our team.
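As a small example of the backup-policy automation listed above, here is a sketch that takes a logical pg_dump backup and verifies the archive is readable before reporting success; the backup directory and the DSN are placeholders, and real policies would add retention and periodic restore tests.

```python
# Take a custom-format pg_dump backup and sanity-check it with pg_restore.
import datetime
import os
import subprocess

BACKUP_DIR = "/var/backups/postgres"  # placeholder path

def backup_and_verify(dsn: str) -> str:
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    path = os.path.join(BACKUP_DIR, f"db_{stamp}.dump")
    # custom format (-Fc) supports selective and parallel restores
    subprocess.run(["pg_dump", "--format=custom", f"--file={path}", dsn], check=True)
    # a readable table of contents is a cheap integrity check of the archive
    subprocess.run(["pg_restore", "--list", path], check=True, stdout=subprocess.DEVNULL)
    return path

if __name__ == "__main__":
    print(backup_and_verify(os.environ["DATABASE_URL"]))
```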