Job offers
Data Migration Engineer – insurance industry experience
Upvanta
⚲ Remote
25 200 - 29 400 PLN (B2B)
Requirements
- Python
- Apache Airflow
- EIS (nice to have)
About the project: We are looking for a Data Migration Engineer to join the Data Advanced Engineering team. The person in this role will be responsible for designing, planning, and executing the migration of insurance data from the existing platform to a new system.
We offer:
- project start: ASAP
- remote work
- participation in an interesting, strategic project in the insurance industry
Requirements:
- practical knowledge of Python
- experience with Apache Airflow
- mandatory knowledge of the insurance industry, in particular experience in data migration projects
Nice to have:
- knowledge of EIS
Daily tasks:
- designing and planning the data migration process
- implementing and executing data migration between systems
- data analysis and mapping of structures between platforms
- collaboration with technical and business teams to ensure data correctness
- ensuring the quality and consistency of the migrated data
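For orientation only: a minimal sketch of the kind of Airflow-orchestrated migration step this listing describes. The DAG id, column mapping, and stubbed extract/load logic are hypothetical, not part of the posting.

```python
# Minimal illustrative Airflow DAG for a single data-migration step.
# Table names, mapping rules, and the extract/load helpers are hypothetical stubs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

COLUMN_MAP = {"policy_no": "policy_number", "cust_id": "customer_id"}  # legacy -> target


def migrate_policies(**_):
    # A real migration would read from / write to the source and target systems
    # (e.g. via database hooks); here the rows are hard-coded for illustration.
    legacy_rows = [{"policy_no": "P-001", "cust_id": 42}]
    mapped = [{COLUMN_MAP.get(k, k): v for k, v in row.items()} for row in legacy_rows]
    assert all("policy_number" in row for row in mapped)  # basic quality check
    print(f"migrated {len(mapped)} rows")


with DAG(
    dag_id="insurance_policy_migration",
    start_date=datetime(2024, 1, 1),
    schedule=None,   # triggered manually per migration wave (Airflow 2.4+ parameter)
    catchup=False,
) as dag:
    PythonOperator(task_id="migrate_policies", python_callable=migrate_policies)
```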
Data Engineer – Central Monitoring Platform (CMP)
emagine Polska
⚲ Lisbon
Requirements
- Documentation
- Microsoft Platform
- GxP (Good x Practice)
- Operations
- Maintenance
- SQL
- Python
- Spark
- PySpark
- Microsoft Azure
We are looking to onboard one additional Data Engineer to support our Central Monitoring Platform (CMP). The primary objective is to increase capacity for business-as-usual operations while contributing to smaller initiatives and improvement activities for the remainder of the year. This is not a lead role. The consultant will join an existing setup with experienced Data Engineers and contribute hands-on in a regulated environment.
Assignment Overview
The consultant will provide data engineering services supporting CMP delivery and operations, including but not limited to:
• Development, maintenance, and operation of data pipelines
• Onboarding of new data, studies, and clinical trials
• BAU support such as incident and ticket handling, root-cause analysis, and operational follow-ups
• Participation in smaller initiatives and scoped projects
• Documentation of pipelines and data flows in line with agreed quality standards
The assignment will be shaped continuously based on priorities.
Must-Have Technical Skills
• Strong hands-on experience with PySpark
• Proven experience with Azure and Databricks
• Strong Python and SQL skills (Spark SQL experience is relevant)
• Experience building and operating production-grade data pipelines
Domain and Compliance Experience
• Experience from clinical or regulated environments is strongly preferred, ideally from GxP-relevant contexts or environments with strong documentation, quality controls, and auditability
Working Style
Hands-on, pragmatic and delivery-oriented profile. Comfortable working with both BAU operational tasks and smaller improvement initiatives. Able to work independently while collaborating closely with the team.
Team Context
The consultant will join our Tech Team, which acts as an enabler team in our setup. The team currently consists of a Solution Architect, Validation Lead, System Manager and two Data Engineers, distributed across Denmark, Croatia, and India, and made up of a mix of external consultants.
Practical Information
• Start: As soon as possible
• Duration: Until end of year
Data Engineer (Azure Databricks)
ITEAMLY SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Kraków
155–190 PLN net (+ VAT) / hour
Requirements
- PySpark
- Databricks
- SQL
- Data Factory
- Azure SQL
- Storage
- Microsoft Power BI
Our requirements:
- 5+ years of experience in Data Engineering
- Strong hands-on experience with: PySpark (DataFrames, optimization techniques); Databricks; SQL
- Experience with Azure services: Data Factory; Azure SQL / Storage
- Knowledge of cloud environments (preferably Azure)
- Strong problem-solving skills and ability to work independently
- Good communication skills in English (B2+)
Nice to have:
- Power BI
- Azure certifications (AZ-900 or similar)
- Databricks certification
About the project: We are looking for an experienced Data Engineer with strong PySpark skills to join a large-scale Azure data platform project. You will work in a distributed environment, focusing on building and optimizing data pipelines.
Responsibilities:
- Develop and maintain data pipelines using PySpark and SQL
- Work with Azure Databricks for large-scale data processing
- Build and maintain ingestion and data transformation workflows
- Perform data wrangling and integrate multiple data sources
- Optimize Spark jobs (DataFrames, partitioning, clustering, SparkSQL)
- Work with data formats such as Delta, Parquet, and CSV
- Collaborate with stakeholders across multiple time zones
- Create and maintain technical documentation
We offer:
- 100% remote work
- International project environment
- Modern data stack (Azure + Databricks)
- Fast recruitment process
- Long-term cooperation
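For orientation only: a minimal sketch of the kind of PySpark job this listing describes (read Parquet, aggregate, partition, write Delta). Paths and column names are hypothetical, and writing Delta assumes a Databricks-style environment.

```python
# Minimal illustrative PySpark pipeline: Parquet in, aggregated Delta out,
# partitioned by date so downstream reads can prune files.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

orders = spark.read.parquet("/mnt/raw/orders")          # hypothetical input path

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("revenue"))
)

(
    daily
    .repartition("order_date")                           # align files with partitions
    .write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/daily_revenue")                  # hypothetical output path
)
```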
Data Engineer (Azure / Databricks)
ITEAMLY SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Warszawa
145–190 PLN net (+ VAT) / hour
Requirements
- Databricks
- PySpark
- SQL
- ADF
- Storage
- SQL DB
- Lakehouse
- Medallion
- Git
- Azure DevOps
- Microsoft Power BI
Our requirements:
- 2–6 years of experience in Data Engineering
- Strong experience with: the Databricks ecosystem; PySpark & SQL
- Experience with: Azure Data Services (ADF, Storage, SQL DB); data modeling and ETL/ELT design
- Understanding of Lakehouse / Medallion architecture
- Experience with CI/CD (Git, Azure DevOps)
- Strong communication and stakeholder management skills
- English level: B2+
Nice to have:
- Power BI
- Unity Catalog
- Experience in enterprise-scale projects
About the project: We are looking for a Data Engineer (mid-level) to support the development of a modern data platform in Azure. This role combines hands-on engineering with collaboration in a client-facing environment.
Responsibilities:
- Design and develop scalable data pipelines using Databricks, PySpark, Python, and SQL
- Build and maintain ETL/ELT workflows (Azure Data Factory)
- Work with modern data architectures (Data Lake / Lakehouse)
- Integrate and transform data from multiple sources
- Collaborate with business and technical stakeholders
- Participate in CI/CD processes for data workflows
- Support data modeling and analytics integration (e.g. Power BI)
We offer:
- Work in a hybrid model (Warsaw office)
- Opportunity to join a large-scale, global data project
- Modern cloud-based tech stack
- Stable contract with extension
- Fast onboarding process
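For orientation only: a minimal sketch of a bronze-to-silver step in a Medallion-style Lakehouse, as referenced in this listing. Table paths and cleaning rules are hypothetical.

```python
# Minimal illustrative Medallion step: raw (bronze) events are deduplicated,
# filtered, and typed into a silver table ready for modelling.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze: raw ingested events, kept as delivered.
bronze = spark.read.format("delta").load("/mnt/bronze/events")

# Silver: cleaned, deduplicated, typed records.
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")
```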
Data Engineer
SQUARE ONE RESOURCES sp. z o.o.
⚲ Warszawa
120–160 PLN net (+ VAT) / hour
Requirements
- Python
- GCP
- AWS
- SQL
- Docker
- Java
- Terraform
- DBT
- CI/CD
Our requirements:
- Openness to engage in continuous learning and agility in addressing business challenges
- Experience in designing, developing, documenting, and maintaining software written in Python and/or Java, as well as dataflows written in SQL
- Knowledge of different data modelling principles and the ability to select the right one depending on the needs of a consumer
- Comfort working with Linux, Git, and Docker to extend and maintain our CI/CD workflows and pipelines, or eagerness to dive into this topic
- Experience working in cloud environments (preferably GCP or AWS)
- Experience with or knowledge of Infrastructure-as-Code tools like Terraform and data transformation tools such as dbt and Dataform
- Cultural awareness, sensitivity, and adaptability in building connections with people from diverse backgrounds
- Excellent conceptual thinking and a proactive approach to decision-making, taking ownership to ensure project success
Responsibilities:
- Work within Agile SCRUM methodologies using the Atlassian toolchain (JIRA, Confluence, GitHub, Git)
- Design, develop, and maintain a modern cloud-based data warehouse on Google Cloud Platform (GCP)
- Build and enhance data pipelines using Python, Java, and BigQuery SQL
- Leverage GCP services such as Cloud Build, Data Transfer, Workflows, Scheduler, and Cloud Functions
- Develop and manage data models and transformation layers using tools like dbt Cloud
- Implement and maintain Infrastructure as Code (IaC) using Terraform
- Contribute to the development of a scalable data platform supporting analytics, BI, and machine learning use cases
- Support the organization’s digital transformation by delivering modern data solutions
- Proactively propose and implement improvements in data architecture and engineering practices
- Collaborate with a cross-functional, international team including BI, Data Science, DevOps, and Software Engineering
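For orientation only: a minimal sketch of a Python-driven BigQuery transformation of the kind this listing mentions. The project, dataset, and table names are hypothetical; in a dbt setup the SELECT would live in a model file rather than in application code.

```python
# Minimal illustrative refresh of a derived BigQuery table from raw data.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")   # hypothetical project id

sql = """
CREATE OR REPLACE TABLE analytics.daily_signups AS
SELECT DATE(created_at) AS signup_date, COUNT(*) AS signups
FROM raw.users
GROUP BY signup_date
"""

client.query(sql).result()   # block until the query job finishes
print("daily_signups refreshed")
```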
Data Engineer
SquareOne
⚲ Remote
20 160 - 26 880 PLN (B2B)
Requirements
- Python
- SQL
- Data modelling
- Linux
- AWS
- GCP
- Docker
- Terraform (nice to have)
- dbt (nice to have)
About the project: We're looking for a Data Engineer to join an international, cross-functional team.
Requirements:
- Openness to engage in continuous learning and agility in addressing business challenges
- Experience in designing, developing, documenting, and maintaining software written in Python and/or Java, as well as dataflows written in SQL
- Knowledge of different data modelling principles and the ability to select the right one depending on the needs of a consumer
- Comfort working with Linux, Git, and Docker to extend and maintain our CI/CD workflows and pipelines, or eagerness to dive into this topic
- Experience working in cloud environments (preferably GCP or AWS)
- Experience with or knowledge of Infrastructure-as-Code tools like Terraform and data transformation tools such as dbt and Dataform
- Cultural awareness, sensitivity, and adaptability in building connections with people from diverse backgrounds
- Excellent conceptual thinking and a proactive approach to decision-making, taking ownership to ensure project success
Daily tasks:
- Work within Agile SCRUM methodologies using the Atlassian toolchain (JIRA, Confluence, GitHub, Git)
- Design, develop, and maintain a modern cloud-based data warehouse on Google Cloud Platform (GCP)
- Build and enhance data pipelines using Python, Java, and BigQuery SQL
- Leverage GCP services such as Cloud Build, Data Transfer, Workflows, Scheduler, and Cloud Functions
- Develop and manage data models and transformation layers using tools like dbt Cloud
- Implement and maintain Infrastructure as Code (IaC) using Terraform
- Contribute to the development of a scalable data platform supporting analytics, BI, and machine learning use cases
- Support the organization’s digital transformation by delivering modern data solutions
- Proactively propose and implement improvements in data architecture and engineering practices
- Collaborate with a cross-functional, international team including BI, Data Science, DevOps, and Software Engineering
Data Engineer
IN4GE sp. z o.o.
⚲ Kraków, Warszawa, Wrocław
140–180 PLN net (+ VAT) / hour
Requirements
- AWS
- Google Cloud Platform
- Microsoft Azure
- Apache Spark
- Kafka
- Hadoop
- PostgreSQL
- MySQL
- MongoDB
- Cassandra
- Airflow
- Azure Databricks
- Spark
- PySpark
- SQL
- Prometheus
- Grafana
Our requirements:
- Minimum 3 years of experience in Data Engineering.
- Knowledge of at least one cloud platform: AWS, GCP, or Azure.
- Experience with big data solutions such as Apache Spark, Kafka, or Hadoop.
- Ability to work with relational databases (e.g. PostgreSQL, MySQL) and NoSQL databases (e.g. MongoDB, Cassandra).
- Practical knowledge of ETL/ELT tools, e.g. Airflow or dbt.
Nice to have:
- Cloud technology certifications.
- Knowledge of Azure Databricks, Spark, and PySpark.
- Experience in data modeling and SQL query optimization.
- Knowledge of monitoring tools such as Prometheus and Grafana.
- Understanding of CI/CD principles and practical experience with DevOps tools.
About the project: We are looking for an experienced Data Engineer to help build and develop modern data processing solutions for our clients. This is a role for someone who thrives in a project environment and wants to have a real impact on designing and implementing scalable data systems. In this position you will collaborate with Data Science, Analytics, and IT teams, supporting the development of data-driven solutions.
Responsibilities:
- Designing, developing, and maintaining data warehouses, ETL/ELT pipelines, and systems for processing large data sets.
- Optimizing the performance of data processing workflows.
- Building and developing data infrastructure in cloud environments (AWS, GCP, Azure) or on-premise.
- Collaborating with analytics and business teams to gather requirements and identify needs.
- Implementing solutions that support data quality and data security.
We offer:
- Cooperation under a B2B contract.
- The opportunity to take part in international projects.
- An ambitious project environment with room to grow.
- Expert support at every stage of onboarding.
- Access to modern technologies and tools.
- Hybrid or on-site cooperation model in one of the locations: Kraków, Wrocław, Warszawa.
Data Analyst (Audio AI & Automotive)
AdTonos
⚲ Słupsk
20 000 - 22 500 PLN gross (employment contract)
Requirements
- Data Science
- SQL
- Python
- Analytical Thinking
- Machine Learning
- Statistics
- IoT
- DSP
- MongoDB
Data Analyst (Audio AI & Automotive)
• Start Date: ASAP
• Location: Poland
• Work Model: Hybrid (remote work with occasional in-person collaboration at our office in Słupsk)
• Employment Type: Employment Contract (Umowa o Pracę) preferred
The Role
We are looking for a Data Analyst to join our R&D team. You will be a key figure bridging the gap between raw vehicle signals and high-level marketing intelligence. Your primary focus will be identifying the "mobility signals" that define a driver's psychographic profile.
Key Responsibilities
• Identify Key Micro-events: Analyze low-level signals from the CAN bus and V-HAL (Vehicle Hardware Abstraction Layer) to define critical events (e.g., braking frequency, fuel levels, trip types) necessary for accurate user profiling.
• Develop "Ground Truth" Methodology: Create a robust reference dataset by mapping raw telemetry data to structured user declarations from surveys.
• Noise Reduction: Collaborate with Data Engineers to develop filters that insulate the system from information noise resulting from different car makes and models.
• Edge Optimization: Help define the "micro-events" that allow the Edge module to reduce data transfer while maintaining low CPU load.
Requirements
• Proven experience in Data Science or Technical Data Analysis.
• Strong ability to work with time-series data and sensor telemetry.
• Experience in building Ground Truth datasets and validating ML models (Confusion Matrix, Precision, Recall, F1-score).
• Familiarity with the Automotive sector or IoT (knowledge of CAN bus/V-HAL is a major plus).
• An analytical mindset capable of correlating physical signals (e.g., driving dynamics) with psychographic segments (e.g., "premium class," "family").
• Strong interpersonal skills, with the ability to communicate complex technical findings effectively to both technical and non-technical stakeholders.
• Good command of written and spoken English.
What We Offer
• Participation in a high-tech R&D project moving from TRL 2 to TRL 7.
• The opportunity to shape a "Privacy-by-Design" product that redefines the global AdTech industry.
• Work with a modern stack including Python, ML frameworks (XGBoost, LSTM), and specialized automotive tools.
Perks Of Being Part Of Our Team
• The opportunity to participate in ambitious projects where your ideas will make an impact and lead the market
• Work in one of the fastest-developing and most innovative sectors in the advertising and programmatic area
• Work in a friendly and inspiring atmosphere in an international environment
• Work in small self-organizing teams where people and relations are truly important
Ready to Make Waves in Audio Tech?
Join AdTonos and be a part of something big. Our recruitment process is streamlined to just one meeting, making it quick and straightforward for you to get started. Apply now and help us shape the future of audio advertising!
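For orientation only: a minimal sketch of the ground-truth validation metrics this listing names (confusion matrix, precision, recall, F1) computed with scikit-learn. The labels are invented for the example.

```python
# Minimal illustrative validation of predicted driver segments against a
# hypothetical ground-truth set ("family" segment encoded as 1).
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # survey-based ground truth (hypothetical)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions (hypothetical)

print(confusion_matrix(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
```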
Cloud Data Engineer (Azure)
Verita HR
⚲ Remote
20 000 - 22 000 PLN (B2B)
Requirements
- Azure
- SQL
- Python
- ETL
- Data engineering
- Azure Data Factory
- Azure Databricks
About the project:
📍 Client: a European bank
🗣️ Recruitment: a phone screen with our recruiter + one online meeting with the hiring managers
🗺️ Remote work with availability within Central European Time (CET)
Verita HR is an international company providing recruitment support within the #Fintech, #Finance and #Banking market in EMEA. We connect the most innovative organizations with the best people in the market. We conduct systematic market research, which allows our Digital Teams to be a step ahead of the competition.
This role sits at the heart of a large digital transformation within a European banking institution. Data is treated as a real business asset, not an afterthought. As a Data Engineer, you will help build and run a modern data platform that supports analytics, reporting, and business decision-making across the organization. You will work on a mix of cloud-based and on-prem solutions, focusing on reliable data pipelines, data quality, and scalable data processing.
What's in it for you:
- An exciting role at a leading European bank
- A 6-month assignment on a B2B contract
- A fully remote role with availability within Central European Time (CET)
- Working with cutting-edge IT technologies
- Personal growth and development opportunities
Requirements:
- 2 to 4 years of hands-on experience as a Data Engineer in cloud environments
- Strong SQL skills
- Practical experience with Python in data engineering use cases
- Experience with Azure Data Factory and Databricks
- Good understanding of data pipelines, ETL/ELT, and the data lifecycle
- Ability to communicate clearly with both technical and non-technical stakeholders
- Experience working in Agile teams
- Cost awareness when designing data solutions
- Curiosity and willingness to learn continuously
Tech stack you will use:
- SQL for data processing and analytics
- Python for transformations and data quality logic
- Azure Data Factory for ingestion (1-to-1 pipelines)
- Databricks for transformations and data quality implementation
- Azure-based data platform services
- Git or similar source code management tools
- CLI environments (bash, PowerShell)
Daily tasks:
- Building and maintaining end-to-end data pipelines
- Ingesting data from multiple sources (structured, semi-structured, unstructured)
- Transforming raw data into clean, usable data sets
- Implementing data quality rules and validation logic
- Serving data to business users, analysts, and internal applications
- Managing storage as a core foundation of the data platform
- Working closely with technical and non-technical stakeholders
- Operating in Agile, DataOps and DevOps environments
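For orientation only: a minimal sketch of the kind of data quality rule this listing mentions, expressed as a PySpark check that fails the run on violations. The table and column names are hypothetical.

```python
# Minimal illustrative data-quality check: count rows, null keys, and
# duplicate account ids, then fail the pipeline if a rule is violated.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

accounts = spark.read.table("bronze.accounts")            # hypothetical source table

checks = accounts.agg(
    F.count("*").alias("row_count"),
    F.sum(F.col("customer_id").isNull().cast("int")).alias("null_customer_id"),
    F.countDistinct("account_id").alias("distinct_account_id"),
).collect()[0]

if checks["null_customer_id"] > 0 or checks["distinct_account_id"] < checks["row_count"]:
    raise ValueError(f"Data quality check failed: {checks.asDict()}")
```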
Analytics Engineer (SQL / DWH / dbt)
Square One Resources
⚲ Warsaw
21 840 - 26 880 PLN (B2B)
Requirements
- SQL
- Kimball
- Data Vault
- dbt (nice to have)
- BigQuery (nice to have)
- Tableau (nice to have)
About the project: We are looking for an experienced Analytics Engineer who will bridge the worlds of data, business, and technology. This is a role for someone who not only builds solutions but also understands their business context and can translate the organization's needs into concrete data models and pipelines.
Requirements:
- at least 5 years of experience in the data field (Analytics Engineer / Data Engineer / Data Analyst)
- very good knowledge of SQL
- experience with data warehouses (DWH)
- experience in data modeling (Kimball or Data Vault)
- ability to work with business stakeholders
- very good command of English
Nice to have:
- experience with dbt
- knowledge of Google BigQuery
- experience working with Tableau
- Python (occasionally)
- experience in an international environment
Daily tasks:
- integrating data from multiple systems and countries
- designing and developing data models (Kimball / Data Vault)
- building and maintaining data pipelines
- collaborating with business and IT on defining analytical needs
- taking part in the transformation towards a future-proof data architecture
Ab Initio Data Engineer
DAC.digital
⚲ Gdańsk
21 000 - 26 000 PLN net (B2B)
Requirements
- Python
- SQL
- Ab Initio
At DAC.digital, we are constantly growing our business. As part of our growth strategy, we are starting a strategic partnership with one of the biggest consulting companies in Germany with a strong focus on Data Management. Their clients include major banking, insurance, retail, telecommunications, and automotive brands.
Key information:
Salary: 21 000 - 26 000 PLN net/month - pure B2B contract; 18 500 - 22 500 PLN net/month - B2B contract (days off included)
It is vital that you have:
• strong experience working with Ab Initio and designing ETL processes;
• knowledge of relational databases and SQL;
• strong analytical thinking and problem-solving skills in data processing;
• experience in large-scale data projects or corporate environments;
• knowledge of English (min. B2);
• experience working in agile methodologies (Scrum, Kanban);
• strong communication skills;
• eagerness to learn and share knowledge.
Nice-to-have technologies:
• Cloud Platforms: ideally GCP, alternatively Azure
• Managed Data Services: BigQuery, alternatively Snowflake
• Data Processing & Orchestration: e.g., Spark, Airflow
Technology stack:
• Python and SQL
• Ab Initio
• GCP/Snowflake Data Platform
You will be responsible for supporting our team in:
• designing, developing, and migrating ETL/ELT processes using Ab Initio within a corporate data environment;
• creating, optimizing, and monitoring complex data flows, as well as integrating data from various sources (relational databases, files, external systems);
• collaborating with data analysts, architects, and other teams to understand business requirements and translate them into technical data solutions;
• designing and implementing data quality validation procedures to ensure consistency and accuracy of data in target systems.
What we offer:
• possibility to work 100% remotely or on-site at our office in Gdańsk;
• B2B contract with 30 days off included;
• private medical care;
• group insurance;
• pre-paid card or Sport Card;
• referral program;
• real Agile practices;
• employee well-being online platform;
• cafeteria benefits.
4 x Mid-level Databricks Developers/Analysts
emagine Polska
⚲ Copenhagen
Requirements
- GitHub
- data processing
- Microsoft Platform
- Stakeholder Management
- business requirements
Overview: Our organization is seeking 4 skilled Databricks Developers/Analysts to join our client's team. Ideal candidates will possess Databricks experience, a background in computer science, and an understanding of GitHub. This role will focus on productionizing data quality controls within our Databricks environment to ensure a comprehensive overview of quality metrics.
Main Responsibilities:
• Productionize quality controls using the Databricks platform.
• Develop and implement data processing and analytics strategies.
• Map data attributes to business requirements.
• Create a v1.0 management report setup that communicates quality metrics.
• Collaborate with stakeholders to operationalize issue remediation.
Key Requirements:
• Expertise in Databricks for efficient data processing and analytics.
• Experience with Great Expectations, GitHub, and production pipelines.
• Strong understanding of data models and schemas.
• Ability to analyze and interpret data for operational improvement.
• Minimum of 4 years of relevant experience.
Start + Duration: 1st of April. Contract duration will be 3 secured months, with the potential (probable) extension to 6 months total.
Workload: Full-time
Location: The consultant shall be able to work onsite in Greater Copenhagen a minimum of 4 days per week, with 1 day remote.
Intern in the Data Warehouse and Reporting Systems Team
Allianz
⚲ Warszawa, Śródmieście
Requirements
- Python
- SQL
- PostgreSQL
- MySQL
- Java
- Spring Boot
- Hibernate
- AWS
Our requirements:
- Experience in building scalable applications
- Knowledge of Python (pandas, PySpark), SQL (PostgreSQL, MySQL), and Java (Spring Boot, Hibernate)
- Experience with cloud tools (AWS, Google Cloud) and advanced Excel features (Power Query, dashboards) for process optimization and delivering complex projects
Responsibilities:
- Developing and deploying new application versions to the Preprod and Prod environments within the Data Warehouse stream
- Maintaining technical documentation
- Active participation in the team's work
- Analyzing system logs and software errors
- Tasks related to the development of the Data Warehouse
Software Engineer with Data
Schibsted & Vend Polska
⚲ Kraków, Gdańsk
14 200 - 21 500 PLN (PERMANENT)
Requirements
- Python
- Data engineering
- AI
- AWS
- Cloud platform
- Continuous integration
- Cloud
- Testing
- Spark
- Snowflake (nice to have)
- Kubernetes (nice to have)
- Docker (nice to have)
- Airflow (nice to have)
About the project:
ABOUT THE COMPANY
Schibsted & Vend Polska is a joint venture between two leading Nordic entities - a major media house, Schibsted, and a dynamic marketplace leader, Vend. With over 300 software engineers in Kraków and Gdańsk, we provide high-quality technological solutions that support the operations and innovations of both parent companies.
ABOUT THE TEAM
The Ads Insights & Targeting team, part of Schibsted's Ad Capabilities unit, builds and maintains critical systems that empower advertisers through data-driven insights and advanced targeting capabilities. We create solutions that make advertising more efficient, transparent, and privacy-safe - helping fund quality journalism across Schibsted's brands. We're a cross-functional team of around 12 people - including an Engineering Manager, a Product Manager, and 10 Software Engineers. We work closely together across the whole product lifecycle - from idea to production - combining product thinking, data expertise, and solid engineering craft. Our culture is built on collaboration, ownership, and curiosity: we review each other's code, run internal workshops, and continuously look for ways to make our systems and ourselves better. We are also actively exploring agentic engineering and integrating AI tools into our day-to-day development workflow, experimenting with new ways of building software faster, smarter, and with greater impact.
We're responsible for:
- Clean Rooms - secure data environments that allow advertisers to understand whether their campaigns drive real results (e.g. if a user who saw an ad later bought their product) - without sharing or exposing any personal data.
- Reporting - delivering accurate, real-time performance insights.
- Audience Targeting Engine - helping advertisers reach the right people, at the right time, across Schibsted brands.
- Reach Algorithm - our next-generation solution for measuring campaign reach in a world without third-party cookies.
ABOUT THE ROLE
As a part of the Ads Insights & Targeting team, your responsibilities will include:
- Designing, building, and maintaining large-scale data systems processing billions of events daily.
- Collaborating with other engineers and product managers to turn business needs into scalable, production-ready solutions.
- Ensuring reliability, performance, and high quality through automated testing, monitoring, and continuous integration across our infrastructure.
- Developing and optimising backend services in Python and Scala using modern data and cloud technologies.
- Contributing to code reviews, architecture discussions, and knowledge-sharing sessions (e.g. workshops, demos, or pair programming).
- Shaping the future of our data platform.
We work in a hybrid way and would like to catch up with you at our vibrant and colourful office in Kraków 2 days per week. This allows for collaboration and meaningful connections with the team. If you're not the working-from-home kind of person, you are of course welcome in our office every day!
Requirements:
ABOUT YOU
We're looking for a Software Engineer with strong programming skills who wants to work with large datasets and collaborate effectively with the team.
If you check most of these boxes, we'd love to hear from you:
- 3+ years of experience as a software engineer, working with modern programming languages and willingness to code in Python/Scala
- Experience with data engineering
- Experience with cloud platforms (preferably AWS) or willingness to learn them
- Strong problem-solving skills - you take initiative and think critically
- Ownership mindset - you take responsibility for your work and align with teams & stakeholders
- Familiarity with best testing practices (unit, integration, end-to-end)
- Great communicator - you value open discussions, give and receive feedback, and enjoy teamwork
- Bonus points: familiarity with Python, Scala, Snowflake, Kubernetes, Docker, Airflow, Spark.
- Polish language is required
Senior Integration Architect - Product Supply
Bayer
⚲ Warszawa
20 240 - 25 300 PLN (PERMANENT)
Requirements
- AWS
About the project: The Data, Analytics, and AI Organization of the Digital Transformation Unit at Bayer Consumer Health operates a global transactional data integration backbone. This platform is designed to connect internal and external applications and systems, providing a uniform experience for customers, consumers, and employees. The backbone offers composable infrastructure modules, development frameworks, CI/CD automations, and developer tooling to software engineering teams that build integrations on top of the platform. As a Senior Integration Architect, you will be responsible for planning and designing integration processes while leading a team of engineers to implement those processes. A key activity will be aligning with Squad Leads in your Unit regarding new interfaces.
Requirements:
- Educational Background: Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, Data Science, or a related field. Significant relevant experience may also be considered.
- Compliance Regulations: Experience with building IT systems in a GxP environment
- Project Management Experience: Demonstrated experience in managing IT projects, Agile project management techniques, coordinating with international partners, understanding the structure of large organizations, and estimating costs and timelines.
- Integration and Automation Expertise: Knowledge of CI/CD processes and experience in building and deploying integration solutions.
- Problem-Solving Skills: Strong analytical skills with the ability to troubleshoot and resolve technical issues effectively.
- Communication Skills: Excellent verbal and written communication skills, with the ability to present complex technical information to non-technical stakeholders.
- Intercultural Competence: Ability to work collaboratively in a multicultural environment, respecting diverse perspectives and fostering teamwork.
- Language Proficiency: Fluent in English, both spoken and written. German is a plus.
- Technical Skills:
  - Proficiency in IT architecture and design, specifically system integration, distributed systems design, and API design
  - Experience with NodeJS and AWS services (SQS, CloudWatch, Lambda, DynamoDB, API Gateway, IAM, Networking Service, Document DB)
  - Familiarity with Infrastructure as Code (Terraform)
  - Azure DevOps, GitHub Repos, and GitHub Actions
  - Linux command line tools and shell scripting
Daily tasks:
- Integration Planning and Design: Lead the planning and design of integration processes and interfaces to ensure effective connectivity between various systems and applications, predominantly in the Product Supply area
- Team Leadership: Manage and mentor a team of engineers, providing guidance and support in the implementation of integration solutions.
- Collaboration with Stakeholders: Work closely with Squad Leads and other stakeholders to understand integration requirements and align integration strategies with business goals.
- Technical Oversight: Ensure that integration solutions are scalable, reliable, and maintainable, adhering to best practices in IT architecture.
- Documentation and Standards: Create, maintain, and review comprehensive documentation for integration processes, standards, and best practices.
- Intercultural Communication: Foster an environment of open communication and collaboration among diverse teams across different geographical locations.
Senior Integration Architect - Manufacturing
Bayer
⚲ Warszawa
20 240 - 25 300 PLN (PERMANENT)
Requirements
- Node.js
About the project: The Data, Analytics, and AI Organization of the Digital Transformation Unit at Bayer Consumer Health operates a global transactional data integration backbone. This platform is designed to connect internal and external applications and systems, providing a uniform experience for customers, consumers, and employees. The backbone offers composable infrastructure modules, development frameworks, CI/CD automations, and developer tooling to software engineering teams that build integrations on top of the platform. As a Senior Integration Architect, you will be responsible for planning and designing integration processes while leading a team of engineers to implement those processes. A key activity will be aligning with Squad Leads in your Unit regarding new interfaces.
Requirements:
- Educational Background: Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, Data Science, or a related field. Significant relevant experience may also be considered.
- Compliance Regulations: Experience with building IT systems in a GxP environment
- Project Management Experience: Demonstrated experience in managing IT projects, Agile project management techniques, coordinating with international partners, understanding the structure of large organizations, and estimating costs and timelines.
- Integration and Automation Expertise: Knowledge of CI/CD processes and experience in building and deploying integration solutions.
- Problem-Solving Skills: Strong analytical skills with the ability to troubleshoot and resolve technical issues effectively.
- Communication Skills: Excellent verbal and written communication skills, with the ability to present complex technical information to non-technical stakeholders.
- Intercultural Competence: Ability to work collaboratively in a multicultural environment, respecting diverse perspectives and fostering teamwork.
- Language Proficiency: Fluent in English, both spoken and written. German is a plus.
- Technical Skills:
  - Proficiency in IT architecture and design, specifically system integration, distributed systems design, and API design
  - Experience with NodeJS and AWS services (SQS, CloudWatch, Lambda, DynamoDB, API Gateway, IAM, Networking Service, Document DB)
  - Familiarity with Infrastructure as Code (Terraform)
  - Azure DevOps, GitHub Repos, and GitHub Actions
  - Linux command line tools and shell scripting
Daily tasks:
- Integration Planning and Design: Lead the planning and design of integration processes and interfaces to ensure effective connectivity between various systems and applications, predominantly in the Product Supply area
- Team Leadership: Manage and mentor a team of engineers, providing guidance and support in the implementation of integration solutions.
- Collaboration with Stakeholders: Work closely with Squad Leads and other stakeholders to understand integration requirements and align integration strategies with business goals.
- Technical Oversight: Ensure that integration solutions are scalable, reliable, and maintainable, adhering to best practices in IT architecture.
- Documentation and Standards: Create, maintain, and review comprehensive documentation for integration processes, standards, and best practices.
- Intercultural Communication: Foster an environment of open communication and collaboration among diverse teams across different geographical locations.
Senior Data Scientist (Ad-Tech / CTV / DSP)
Edge One Solutions Sp. z o.o.
⚲ Warszawa
Requirements
- Python
- SQL
Our requirements:
- 7+ years of experience as a Data Scientist (strong ad-tech focus)
- Experience in DSP / programmatic / CTV measurement
- Hands-on experience with bidder optimization and attribution models
- Very good knowledge of statistics, ML, Python, SQL
- Higher education (MSc / PhD preferred - CS, statistics, economics, or a related field)
Nice to have:
- GCP / Vertex AI, feature stores, experiment tracking
- Experience with LLMs / agentic AI (especially evaluation)
- Ability to communicate with business stakeholders
- A research-oriented, innovative approach
About the project: Our client delivers AI solutions that maximize business impact and minimize risk for organizations. The platform and applications integrate with key business processes, enabling teams to develop, deploy, and manage AI at scale.
About the role: We are looking for an experienced Senior Data Scientist to join a project around a modern CTV advertising platform and support the development of solutions in attribution, bidder optimization, and audience intelligence.
Responsibilities:
- Designing and deploying ML models for bidder optimization, pricing, budget allocation, and pacing
- Building model orchestration systems spanning multiple business KPIs
- Developing attribution and measurement models (CTV)
- Working on audience segmentation (first- and third-party data)
- Analyzing campaign effectiveness and making product recommendations
- Collaborating with Product & Engineering on production deployments
We offer:
- Individual support from a Service Delivery Manager who will help you plan your career path and look after your comfort on the project
- Training, certifications, and conferences - we co-fund or fully cover the cost of developing technical competencies
- SmartChange - we make it possible to change projects and match the next one to your preferences
- Not just work - outings, sports, and our #edge1talks webinars
- Physical activity - we support sports initiatives and training room rentals
- Health package - private medical care, a sports card, insurance, and psychological support (HearMe)
- Flexible benefits - you decide how to spend the points in our benefits system
- Free English lessons - tailored to your level, ideal for working in an international environment
Senior Data Engineer | f/m/d
ERGO Technology & Services S.A.
⚲ Gdańsk, Warszawa, Śródmieście
Requirements
- Docker
- Databricks
- Azure DevOps
Our requirements:
- fluency in English
- good knowledge of Azure Databricks
- strong SQL skills
- knowledge of ETL processes/programming
- great communication skills and an open mind, not afraid of challenges
- at least five years of experience using PySpark and Python with an API framework (FastAPI)
- monitoring & automation: Dynatrace (or similar tools), System Automation
- knowledge of methodologies: Agile, Scrum, Kanban
- strong software engineering basics such as dev tooling (IDE, Git, Docker), a code quality mindset, and testing
- data modeling for efficient interactive use cases
- ability and willingness to quickly learn new (and old) technologies
- willingness to travel internationally
Nice to have:
- experience in Azure DevOps is a plus
- experience in front-end development with JavaScript (Vue.js) and .NET
Responsibilities:
- designing and implementing data models and architectures that support financial data management
- developing and maintaining ETL processes to integrate data from diverse sources, including market data feeds, customer information systems, and transactional databases
- ensuring data quality, security, and compliance with regulatory requirements
- collaborating with business owners, data analysts and scientists, software engineers, and DevOps specialists to support the development of predictive models and business intelligence tools
- optimizing data processing workflows for performance, scalability, and reliability
- working in an international Scrum team
We offer:
- Let's be healthy – medical package, sports card, and numerous sports sections – these are some of the benefits that help our employees stay in good shape.
- Let's be balanced – work-life balance is a key aspect of a healthy workplace. We offer our employees flexible working hours, a confidential employee assistance program, as well as the possibility of remote working. However, staying at home with our in-office gaming room and dog-friendly office in Warsaw won't be easy.
- Let's be smart – we organize numerous workshops and training courses. Thanks to hackathons and meetups, our specialists share their expertise with others. Additionally, we have a wide range of digital learning platforms and language courses.
- Let's be responsible – each year, we participate in several CSR activities, during which, together with our colleagues, we do our best to create a better future.
- Let's be fun – company-wide bike races and soccer matches, film marathons in our cinema room, and other engaging team-building activities – we've got it covered!
- Let's be diverse – every team member is valued, regardless of gender, nationality, religious beliefs, disability, age, and sexual orientation or identity. Your qualifications, experience, and mindset are our greatest benefit!
Senior Data Engineer Product Core Data Asset
Bayer
⚲ Warszawa
20 240 - 25 300 PLN (PERMANENT)
Requirements
- Python
About the project: At Bayer we're visionaries, driven to solve the world's toughest challenges and striving for a world where 'Health for all, Hunger for none' is no longer a dream, but a real possibility. We're doing it with energy, curiosity and sheer dedication, always learning from the unique perspectives of those around us, expanding our thinking, growing our capabilities and redefining 'impossible'. There are so many reasons to join us. If you're hungry to build a varied and meaningful career in a community of brilliant and diverse minds to make a real difference, there's only one choice.
The PS Data & Analytics team at Bayer Consumer Health focuses on driving digital transformation and innovation by creating best-in-class analytical solutions that enable data-driven decision making and performance optimization for Bayer Consumer Health's Supply Chain organization. You will be part of the data & analytics organization and will be responsible for building data products in the area of product supply and supply chain. You partner with business stakeholders, data architects, data scientists, analytics leads as well as other engineers. You will build data pipelines and data models and provision data for analytics products and data scientists. You will also make sure that proper development processes are followed within the team, enhance implementation frameworks and guide other team members in building scalable, secure and well-performing data products. If you are interested in joining a young and dynamic team driving the digital transformation of Bayer Consumer Health, we would like to hear from you.
Responsibilities:
- Ingest data from transactional systems and data warehouses of other business functions into the divisional data analytics platform.
- Implement data pipelines that integrate data from different sources (e.g. supply chain planning data, product hierarchy, quality data, inventory data, procurement data, distributor and transportation data) to develop globally harmonized data models and KPI calculations
- Contribute to defining data management and data quality standards. Ensure that data is well managed to build stable, reusable and quality-assured data assets.
- Collaborate with other IT functions (enabling functions' data asset teams, analytics teams, platform product managers & integration architects) to ensure the aforementioned activities are executed effectively.
- Ensure that product supply data products adhere to data protection standards.
- Continuously enhance implementation frameworks based on the needs of the analytics products in your responsibility.
- Guide other data engineers in your team (internal or external engineers) and ensure that all engineers apply the same design principles and assure code quality. Break down bigger work packages into manageable tasks.
- Together with the assigned data architect, ensure that cost and time estimations are accurate, quality of delivery is assured, and deliverables are properly tested, documented and handed over to the operations team
Requirements:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of working experience in the field of Data & Analytics, preferably in the area of product supply and the CPG industry
- 3+ years of proficient coding experience with Python for data engineering, including SQL and PySpark (DataFrame API, Spark SQL, MLlib), with hands-on experience in various databases (SQL/NoSQL), key libraries (e.g., pandas, SQLAlchemy), parallel processing, and advanced data transformation and performance optimization techniques.
- Excellent data engineering & technology knowledge (Azure Data Lake Gen2, Azure Synapse and Databricks) as well as data management know-how (data cataloguing, data quality management)
- Solid understanding of data modeling, ETL processing and lakehouse concepts.
- Knowledge of CI/CD processes and tools (GitHub, Azure DevOps Pipelines)
- Profound data content knowledge (Supply Chain, Logistics, Quality Management) and Product Supply process know-how
- Experience in Agile methodologies (Scrum, Kanban)
- Strong problem-solving and analytical skills. Excellent interpersonal and communication skills, active listening, consulting, challenging, presentation skills.
- Fluent in English, both written & spoken, intercultural awareness and willingness to travel
Senior Data Engineer (Python / GenAI) | Kraków, hybrid
Antal
⚲ Kraków
35 280 - 43 680 PLN (B2B)
Requirements
- Data engineering
- Python
- FastAPI
- Microservices
- API
- MongoDB
- PostgreSQL
- Hadoop
- BigQuery
- Cloud platform
- Azure
- GCP
- Unix
- Linux
- RAG (nice to have)
- Google ADK (nice to have)
- AI (nice to have)
- DevOps (nice to have)
- CD pipelines (nice to have)
- GenAI (nice to have)
About the project: Senior Data Engineer (Python / GenAI)
📍 Location: Kraków, Poland (hybrid: 2 days in the office, 3 days remote)
📄 B2B Contract: 210 – 260 PLN / hour
We are looking for a proactive Senior Data Engineer (Python / GenAI) to join an international team building advanced data integration and retrieval solutions powering Generative AI across the organization. You will work closely with Engineers, Architects and AI Specialists to design, build and optimize data flows that directly support AI agents, RAG pipelines and large-scale GenAI applications. This role sits within a collaborative engineering environment and offers the opportunity to influence architecture, development standards and the overall quality of next-generation AI-driven systems.
Key Responsibilities:
- Design and develop data integration components enabling GenAI models to access and process data from multiple internal systems.
- Build and maintain data transfer pipelines ensuring scalability, reliability and security.
- Develop Python-based microservices (FastAPI) integrated with broader GenAI platforms.
- Work with vector stores, memory stores and RAG pipelines to optimize data retrieval and context generation for AI agents.
- Collaborate with cross-functional teams to align data integrations with the overall GenAI architecture.
- Implement monitoring, observability and performance tuning for data services.
- Participate in Agile ceremonies: sprint planning, reviews, retrospectives and daily stand-ups.
What We Offer:
- Long-term cooperation within a high-impact, innovative AI programme.
- Real influence on architecture, engineering standards and data integration strategy.
- Work in a multicultural, distributed environment.
- Private medical care (LuxMed) and a Multisport card.
- A stable B2B contract with a competitive hourly rate.
Requirements:
- 5+ years of experience in software or data engineering with a strong focus on Python.
- Hands-on experience with FastAPI, microservices and API-driven architectures.
- Experience with databases and data platforms such as MongoDB, PostgreSQL, Hadoop, BigQuery, Starburst or similar.
- Understanding of vector stores, embeddings or semantic search concepts.
- Experience working with cloud platforms (GCP or Azure).
- Solid understanding of Unix/Linux environments.
- Strong debugging skills and the ability to clearly communicate findings.
Nice to Have:
- Experience with RAG pipelines, context engineering or evaluation systems.
- Knowledge of Google ADK or similar AI development toolkits.
- Experience with memory stores or caching systems.
- Exposure to GenAI platforms, LLM-based solutions or AI agents.
- Familiarity with DevOps practices and CI/CD pipelines.
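For orientation only: a minimal sketch of a FastAPI microservice exposing a retrieval endpoint for a RAG-style pipeline, as referenced in this listing. The in-memory document store and keyword scoring are stand-ins for a real vector store and embedding similarity.

```python
# Minimal illustrative retrieval service: POST a question, get back the
# top-k context documents for a downstream GenAI prompt.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="context-retrieval-service")

# Hypothetical document store; a production service would query a vector DB.
DOCUMENTS = {
    "doc-1": "Quarterly data platform architecture overview.",
    "doc-2": "Runbook for the customer ingestion pipeline.",
}


class Query(BaseModel):
    question: str
    top_k: int = 2


@app.post("/retrieve")
def retrieve(query: Query) -> dict:
    # Naive keyword overlap as a placeholder for embedding similarity.
    terms = set(query.question.lower().split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return {"contexts": [text for _, text in scored[: query.top_k]]}

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```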
Program Manager
Edge One Solutions Sp. z o.o.
⚲ Warszawa
Requirements
- Microsoft Azure
- Google Cloud Platform
- AWS
- Jira
- Azure DevOps
- Confluence
- MS Project
Our requirements:
- 8+ years of experience in Program Management / Project Management, preferably on projects related to data engineering or data platforms
- Very good knowledge of data pipelines, ETL/ELT, cloud data platforms (Azure/GCP/AWS), data modeling, and engineering workflows
- Experience working with clients from the UK or Europe in a nearshore or hybrid model
- Proven experience in customer onboarding, requirements gathering, and delivery governance
- Very good communication, presentation, and stakeholder management skills
- Good knowledge of project management tools such as Jira, Azure DevOps, Confluence, MS Project
- Ability to prepare and present clear weekly and monthly reports, dashboards, and executive summaries
Nice to have:
- Certifications such as PMP, PRINCE2, CSM, or cloud certifications (Azure/GCP/AWS)
- Experience in an Agile / Scrum environment
- Familiarity with data products, analytics platforms, or migration programmes
About the project: Our client delivers AI solutions that maximize business impact and minimize risk for organizations. The platform and applications integrate with key business processes, enabling teams to develop, deploy, and manage AI at scale.
About the role: We are looking for an experienced Program Manager with a strong Data Engineering background who will be responsible for leading customer onboarding, managing end-to-end program delivery, and ensuring effective governance across multiple data initiatives. We are looking for someone who combines a practical understanding of data engineering with strong skills in stakeholder management, project reporting, and end-to-end delivery oversight. This is a nearshore role, carried out in close cooperation with the client's UK-based teams.
Responsibilities:
- Leading complex Data Engineering programs spanning multiple workstreams
- Managing the customer onboarding process, including planning, requirements gathering, coordination, and technical alignment
- Overseeing end-to-end delivery governance, including milestones, risks, dependencies, and change requests
- Creating and maintaining program dashboards, weekly reports, and executive updates
- Ensuring on-time delivery, solution quality, and compliance with agreed SLAs
- Acting as the main point of contact for the client's UK stakeholders
- Running regular status meetings, steering committees, and program reviews
- Translating business needs into concrete action plans for engineering teams
- Identifying project risks and implementing mitigation measures
- Supporting teams on architecture, pipelines, integration, and data quality, drawing on Data Engineering knowledge
- Reviewing technical solutions during onboarding and program delivery
- Working with technical leads on solution feasibility, estimation, and roadmap alignment
- Coordinating cross-functional teams in a nearshore/offshore model
- Supporting resource planning, onboarding of new team members, and productivity monitoring
- Building a culture of transparency, accountability, and continuous improvement
We offer:
- Individual support from a Service Delivery Manager who will help you plan your career path and look after your comfort on the project
- Training, certifications, and conferences - we co-fund or fully cover the cost of developing technical competencies
- SmartChange - we make it possible to change projects and match the next one to your preferences
- Not just work - outings, sports, and our #edge1talks webinars
- Physical activity - we support sports initiatives and training room rentals
- Health package - private medical care, a sports card, insurance, and psychological support (HearMe)
- Flexible benefits - you decide how to spend the points in our benefits system
- Free English lessons - tailored to your level, ideal for working in an international environment
Power Apps Developer / Power Platform Automation Specialist
U.S.Bank | Elavon
⚲ Warszawa, Mokotów
Requirements
- Microsoft Power Platform
- Power Apps
- Power Automate
- Dataverse
- SQL Server
- Power Fx
- PowerShell
- Python
- Azure Logic Apps
- Azure Functions
Our requirements:
- 2+ years of experience with Microsoft Power Platform (Power Apps, Power Automate)
- Strong knowledge of Dataverse, SQL Server, and data modeling
- Experience with Power Automate for building complex workflows and integrations
- Familiarity with Common Data Service (CDS) and the Microsoft 365 ecosystem
- Ability to write and debug Power Fx formulas and expressions
- Understanding of security roles, permissions, and environment management in Power Platform
- Excellent problem-solving and communication skills
Nice to have:
- Experience with PowerShell scripting for automation and administration
- Experience with Python for data processing or integration tasks
- Familiarity with Windows Task Scheduler or cron jobs for scheduled automation
- Knowledge of Azure Logic Apps, Azure Functions, or custom connectors
- Microsoft certifications (e.g., PL-100, PL-400) are a plus
About the project: The Power Apps Developer / Power Platform Automation Specialist will design, develop, and maintain custom business applications and automations using Microsoft Power Platform and a small handful of other tools. This role requires hands-on experience building Power Apps, integrating with data sources such as SQL Server, Dataverse, and other databases, and automating workflows using Power Automate. The ideal candidate is proactive, detail-oriented, and comfortable working across teams to deliver scalable and secure solutions. Bonus qualifications include experience with PowerShell, Python, and task scheduling tools like Windows Task Scheduler or cron.
Responsibilities:
- Design and develop Canvas and Model-driven Power Apps tailored to business needs
- Build and maintain Power Automate flows to automate data retrieval, transformation, and population across systems
- Integrate Power Apps with SQL Server, Dataverse, SharePoint, and other data sources
- Create custom connectors and use APIs to extend Power Platform capabilities
- Collaborate with business stakeholders to gather requirements and translate them into technical solutions
- Ensure solutions are scalable, secure, and aligned with best practices
- Troubleshoot and optimize existing Power Platform solutions
- Document solutions, workflows, and architecture for maintainability
We offer:
- Working hours: 2:00 PM–10:00 PM (training period), then 11:00 AM–7:00 PM.
Junior Developer - Cobol + Java + Spring (banking sector) (m/f/x)
Asseco Poland S.A.
⚲ Warszawa, Wilanów
Requirements
- COBOL
- Java
- SQL
- C
- C++
- Python
- Jakarta EE
- Linux
- XML
Our requirements:
- Availability of at least 4/5 FTE - a mandatory condition
- A technical university degree or final year of studies
- Motivation to develop towards the Cobol language
- Good programming skills in a structured language, preferably Java
- Knowledge of SQL and relational databases
- English sufficient to comfortably read technical documentation
- Very good organization of your own work and attention to detail
- Analytical thinking and inquisitiveness in solving problems
Nice to have:
- Knowledge of Java, C/C++, or Python
- Knowledge of Jakarta EE, Spring
- Knowledge of Linux and Linux commands
- Knowledge of XML/XSD, JSON
- Knowledge of web services, REST API
- First experience working in the banking domain
About the project: Join the team responsible for maintaining and developing a foreign-currency and PLN settlement system for our banking client. Work is hybrid - 3 days a week (Monday, Tuesday, and Wednesday) in the Warsaw office (ul. Adama Branickiego 13, Wilanów).
Responsibilities:
- Development of banking settlement and reconciliation systems
- Software maintenance and defect fixing
- Analyzing and writing specifications for minor application modifications
- Preparing and running unit tests
- Writing technical documentation
Kierownik / Kierowniczka Zespołu Zarządzania Danymi
Bank Pekao
⚲ Warszawa, Wola
O projekcie: W Centrum Zarządzania Danymi i Informacją Banku Pekao S.A. realizujemy strategiczne projekty transformacyjne, które zmieniają sposób działania banku w kierunku organizacji data-driven. Dołączając do naszego zespołu będziesz mieć bezpośredni wpływ na architekturę, narzędzia i procesy związane z rozwojem oraz utrzymaniem systemów analityki biznesowej klasy DataLake, DataWarehouse, BI, MIS, CRM, a także codzienną pracę zespołu łączącego wiedzę biznesową, procesową i techniczną z kompetencjami analitycznymi oraz doświadczeniem pracy w formule portfela projektów/programu. Będziesz działać projektowo, rozwijając przy tym umiejętności zarządzania zmianą, projektowania rozwiązań i współpracy między jednostkami w banku. Zakres obowiązków: Leadership zespołu analityków jakości danych, odpowiedzialnych za monitoring jakości danych oraz realizację procesów ich poprawy. Zarządzanie zespołem: wyznaczanie celów, motywowanie, rozliczanie wyników oraz rozwój kompetencji podległych pracowników. Odpowiedzialność za rozwój systemu monitoringu i raportowania jakości danych w ramach Nowego Ekosystemu Danych / Nowej Platformy Danych. Zapewnienie sprawnego i efektywnego funkcjonowania procesu poprawy jakości danych w skali całego Banku. Realizacja inicjatyw programu budującego organizację DATA‑DRIVEN, ze szczególnym uwzględnieniem: poprawy backlogu błędów jakości danych, rozwoju i doskonalenia monitoringu jakości danych. Pełnienie roli lidera w obszarze jakości danych w Banku, w tym: inicjowanie działań usprawniających, koordynacja współpracy interesariuszy, nadawanie kierunku strategicznego dla obszaru Data Quality. Przygotowywanie materiałów i rekomendacji dotyczących jakości danych na Komitet ds. Zarządzania Danymi i Jakością Danych oraz inne fora decyzyjne w ramach programu DATA‑DRIVEN. Aktywne budowanie i wspieranie społeczności wokół Nowej Platformy Danych w modelu Hub & Spoke. Odpowiedzialność za jakość i terminowość realizacji projektów prowadzonych przez podległy zespół. Bieżąca współpraca i efektywna komunikacja z innymi zespołami i jednostkami w Banku. Oferujemy: Udział w nowatorskich i zróżnicowanych projektach o skali spotykanej wyłącznie w największych firmach. Atrakcyjne wynagrodzenie oraz premie uwzględniające indywidualne wyniki i zaangażowanie. Prywatną opiekę medyczną dla Ciebie i Twojej rodziny na preferencyjnych warunkach. Kartę MultiSport i Ubezpieczenie Grupowe na korzystnych warunkach. System szkoleń i programów rozwojowych, w tym dostęp do LinkedIn Learning. Udział w wyjątkowych inicjatywach realizowanych w banku (wolontariat pracowniczy, letnie i zimowe mistrzostwa sportowe banku, akcje ekologiczne i prozdrowotne).
Informatica PowerCenter Developer
Scalo Sp. z o.o.
⚲ Warszawa
120 zł netto (+ VAT) / godz.
Wymagania
- Informatica PowerCenter
- Oracle
- SQL
- PL/SQL
Nasze wymagania: masz minimum 3 lata doświadczenia w projektowaniu, implementacji i wdrażaniu rozwiązań ETL w oparciu o Informatica PowerCenter, posiadasz minimum 3 lata doświadczenia w projektowaniu, implementacji i wdrażaniu systemów informacyjnych w środowisku Oracle, dobrze znasz Informatica PowerCenter w zakresie budowy procesów ETL, dobrze znasz RDBMS Oracle 9i / 10g w obszarze projektowania baz danych i aplikacji, bardzo dobrze znasz SQL oraz posiadasz znajomość PL/SQL, masz praktyczne doświadczenie w projektowaniu i programowaniu hurtowni danych oraz systemów informacyjnych, pracowałeś/aś przy rozwiązaniach dla instytucji finansowych, telekomunikacyjnych lub organizacji o podobnej skali i złożoności, znasz zagadnienia inżynierii systemów informatycznych oraz metodyki ich wytwarzania. O projekcie: W Scalo zajmujemy się dostarczaniem projektów software'owych i wspieraniem naszych partnerów w rozwijaniu ich biznesu. Tworzymy oprogramowanie, które umożliwia ludziom dokonywanie zmian, działanie w szybszym tempie oraz osiąganie lepszych rezultatów. Jesteśmy firmą, która wykorzystuje szerokie spektrum usług IT, żeby pomagać klientom. Obszary naszego działania to m.in.: doradztwo technologiczne, tworzenie oprogramowania, systemy wbudowane, rozwiązania chmurowe, zarządzanie danymi, dedykowane zespoły developerskie. Cześć! U nas znajdziesz to, czego szukasz - przekonaj się! Zakres obowiązków: prowadzenie prac projektowych i implementacyjnych w zakresie rozwoju hurtowni danych oraz systemów informacyjnych, projektowanie algorytmów oraz implementacja procesów ETL, udział we wdrożeniu narzędzia Informatica PowerCenter, projektowanie baz danych oraz martów raportowo-analitycznych, projektowanie, implementacja i rozwój środowisk raportowo-analitycznych w oparciu o rozwiązania klasy BI, testowanie rozwiązań informatycznych oraz kontrola jakości prac wykonanych przez innych, opracowywanie specyfikacji technicznej oraz dokumentacji projektowej, technicznej i administracyjnej, przygotowywanie pakietów instalacyjnych oprogramowania, przygotowywanie odbiorów oprogramowania i udział w przekazywaniu systemów do eksploatacji, praca przy rozwiązaniach dla organizacji o dużej skali i złożoności procesów, praca hybrydowa – wizyty w biurze w Warszawie 2 razy w miesiącu, stawka do 120 zł/h + VAT (B2B). Oferujemy: stabilną współpracę i różnorodne projekty (Software, Embedded, Data, Cloud Services), możliwość rozwoju kompetencji jako Ambasador Scalo - doskonal skille techniczne i presalesowe w społeczności naszych ekspertów, wpływaj na rozwój organizacji, wspieraj innych, przestrzeń do budowania marki osobistej - twórz wartościowe treści, dziel się wiedzą i doświadczeniem, występuj jako prelegent na wydarzeniach (np. ProMeet), prywatną opiekę medyczną - Luxmed, kafeteryjny system benefitów - Motivizer, dostęp do karty Multisport, program poleceń pracowniczych z nagrodami, integracje i wydarzenia firmowe.
Graph Database Engineer & Neo4j
Green Minds Sp. z o.o.
⚲ Warsaw
31 920 - 38 640 PLN (B2B)
Wymagania
- Java
- MongoDB
- Neo4j
- Azure
- CI/CD
- ML
- GraphQL
- Data engineering (nice to have)
O projekcie: About the project We’re forming a new delivery team in Poland to take over and continue the development of a web platform built on a graph database (Neo4j). The goal is to ensure a smooth knowledge transfer, establish stable technical processes, and keep developing new features to make the system scalable and future-ready. You’ll work with experienced engineers from different domains — backend, frontend, DevOps, QA, and data science — in a modern, collaborative setup. Wymagania: What we expect - 5+ years of professional experience in software development - 2+ years working directly with Neo4j and Cypher - Strong skills in Java and solid understanding of graph theory - Experience with distributed systems and cloud platforms (AWS / Azure / GCP) - Familiarity with CI/CD and automation practices - Higher education in Computer Science, Mathematics, Physics, or a similar field Nice to have: - Academic or research experience (lecturer, professor, researcher) - MSc or PhD degree in a technical discipline - Experience with knowledge graphs, semantic modeling, or ML integrations - Certifications like Neo4j Certified Professional or AWS Data Engineer Codzienne zadania: - As a Graph Database Engineer, you’ll be the key person responsible for maintaining and evolving the graph data architecture behind the platform. You will: - Design and optimize graph data models in Neo4j - Write and tune Cypher queries and data pipelines - Develop custom plugins and API integrations in Java - Improve performance, indexing, and schema design - Collaborate with data scientists on graph analytics and recommendation engines - Document architecture and support the onboarding of new engineers
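The responsibilities above revolve around Neo4j graph modeling and Cypher query tuning alongside Java integrations; as a language-neutral illustration only, here is a minimal Python sketch that runs a parameterized Cypher query through the official neo4j driver. The connection URI, credentials, and the Person/Company schema are hypothetical placeholders, not details of the actual platform.
# Minimal sketch: running a parameterized Cypher query from Python with the
# official neo4j driver. The URI, credentials, and the
# (:Person)-[:WORKS_AT]->(:Company) schema are hypothetical illustrations.
from neo4j import GraphDatabase

URI = "neo4j://localhost:7687"   # hypothetical endpoint
AUTH = ("neo4j", "password")     # hypothetical credentials

driver = GraphDatabase.driver(URI, auth=AUTH)

def colleagues_of(person_name: str) -> list[str]:
    # Parameterized Cypher keeps the query plan reusable and avoids injection.
    query = (
        "MATCH (p:Person {name: $name})-[:WORKS_AT]->(c:Company)"
        "<-[:WORKS_AT]-(colleague:Person) "
        "RETURN colleague.name AS name"
    )
    with driver.session() as session:
        result = session.run(query, name=person_name)
        return [record["name"] for record in result]

if __name__ == "__main__":
    print(colleagues_of("Alice"))
    driver.close()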
Data Software Engineering
EPAM Systems (Poland) sp. z o.o.
⚲ Kraków, Grzegórzki
Wymagania
- Python
- Java
- SQL
- Docker
- Kubernetes
Nasze wymagania: Citizens of Poland and permanent residents who are eligible to work in this country Second-to-last or final year university students and graduates Individuals aged 18 years and older English level from B2 (Upper-Intermediate) and higher Hands-on experience with Python or Java languages with a good theoretical background Solid knowledge of Relational Database Management System (RDBMS) theory Proficiency in Structured Query Language (SQL), including DML, DDL, DCL and TCL Strong theoretical foundation in Data Warehousing (DWH) and Data Lakes Good grasp of Linux, as well as containerization and orchestration technologies such as Docker and Kubernetes Mile widziane: Degree from a technical university or other educational institution with a technical specialization O projekcie: Data Software Engineering is an IT area that deals with building and supporting the infrastructure and workflows necessary to collect, store, process and transform big data into an analysis-ready format. The program is divided into two progressive stages, allowing you to gradually advance to more profound topics: Stage 1: Fundamentals (~3 months, ~10-12 hrs/week) You will explore self-study materials at your convenience and complete assigned tasks with immediate automated feedback. To move forward, you will need to show good results and successfully pass four parts of the final test and technical interview. We will assess your knowledge of Git, Python, DevOps, CI/CD, DB, SQL, DWH and ETL. Stage 2: Specialization (~3 months, ~20 hrs/week) We will offer you a mentoring program with one-on-one meetings with production experts. Your learning will become more intensive and practice-driven, with a focus on advanced concepts and tools. After completing the training, the best candidates will have the opportunity to interview for a Junior position and join the EPAM team in Poland, based on demonstrated skills and available opportunities in the company. Zakres obowiązków: If you have one year of hands-on experience with Python or Java and are striving to step into the world of Data Software Engineering (Big Data), this program is for you. Our learning path consists of two stages, offering you a step-by-step approach to gaining in-depth knowledge. Apply today to have the opportunity to: • Gain expertise in modern software development technologies • Dive into the capabilities of the leading public cloud platforms • Explore the design, development and maintenance of big data software systems • Learn large-scale data handling tools (Apache Spark, Kafka, Databricks, etc.) After successfully completing all program stages, you will gain market-oriented skills that you may further apply at EPAM or elsewhere in the IT industry. Oferujemy: Industry-focused education with EPAM, a leading software engineering company Intensive, hands-on learning through numerous practical assignments Free access to top-notch educational materials and resources Regular guidance from seasoned mentors throughout your advanced training Deep dive into your specialization and emerging technologies like GenAI and Cloud Opportunity to join the EPAM team upon successful completion of all stages
Data Scientist
Smart Topaz sp. z o.o.
⚲ Warszawa
15 000 - 20 000 PLN (PERMANENT)
Wymagania
- Python
- SQL
- Databricks
- BigQuery (nice to have)
O projekcie: Smart Topaz is working with an international company operating in the digital and data-driven retail space. The organization is currently strengthening its analytical capabilities to support further development of online and customer-focused initiatives. We are looking for a Senior Data Scientist who will contribute to advanced analytics and machine learning use cases, working closely with business and technology stakeholders. Wymagania: - Experience in data science applied to business problems - Strong analytical mindset with hands-on Python and SQL skills - Ability to work independently and structure complex topics - Clear communication and pragmatic approach to problem solving - Fluent English Codzienne zadania: - Development of analytical and predictive models supporting digital and customer-related areas - Contribution to defining analytical priorities and data-driven recommendations - Collaboration with data and business teams to deliver scalable solutions
Data Science Engineer – pharmaceutical industry (f/m/x)
Sii Sp. z o.o.
⚲ Białystok, Centrum, Bydgoszcz, Gdańsk, Oliwa, Katowice, Kraków, Podgórze, Lublin, Łódź, Śródmieście, Piła, Poznań, Wilda, Rzeszów, Szczecin, Toruń, Warszawa, Mokotów, Wrocław, Fabryczna
Wymagania
- Python
- GitHub
- GitLab
- AWS
- Grafana
- Datadog
- GenAI
- RAG
Nasze wymagania: Minimum 5 years of experience in Python and working knowledge of JavaScript Understanding of version control systems (Git) and CI/CD pipelines Strong foundation in algorithms, data structures, and graph theory, including AST concepts Background in data engineering, including working with large codebases in on-prem and cloud (AWS) environments Familiarity with tools such as Artifactory, Prometheus, and Splunk Knowledge of AI/ML concepts, particularly GenAI, Graph RAG, and vector databases Ability to produce clear technical documentation and operate with an independent, exploratory problem-solving mindset Fluent Polish required Residing in Poland required O projekcie: We are looking for a Data Scientist with a strong engineering mindset to build the data foundation for software portfolio efficiency. This role focuses on extracting, structuring, and analysing data from code repositories, artifact management systems, and observability platforms. You will work at the intersection of data engineering, software architecture, and AI, leveraging modern approaches such as graph-based modelling and GenAI to map complex dependencies and improve visibility across a large-scale ecosystem of products and platform services. Zakres obowiązków: Extract and analyze data from code repositories (GitHub, GitLab) to build data-driven software metrics Use Abstract Syntax Trees (AST) and static analysis techniques to understand code structure, dependencies, and usage patterns Design and implement graph database solutions to model complex relationships across services and platforms Develop and enforce metadata standards (e.g., via Backstage/DevHub) to ensure consistency across systems Integrate data from observability and telemetry platforms (Grafana, DataDog, Prometheus, Splunk) to track adoption and performance Build algorithms to calculate software efficiency metrics based on defined criteria Apply AI/GenAI techniques (e.g., Graph RAG, vector databases) to enhance data retrieval, dependency mapping, and insights generation Oferujemy: Great Place to Work since 2015 - it’s thanks to feedback from our workers that we get this special title and constantly implement new ideas Employment stability - revenue of PLN 2.1BN, no debts, since 2006 on the market We share the profit with Workers - over PLN 76M has already been allocated for this aim since 2022 Attractive benefits package - private healthcare, benefits cafeteria platform, car discounts and more Comfortable workplace – class A offices or remote work Dozens of fascinating projects for prestigious brands from all over the world – you can change them thanks to Job Changer application PLN 1 000 000 per year for your ideas - with this amount, we support the passions and voluntary actions of our workers Investment in your growth – meetups, webinars, training platform and technology blog – you choose Fantastic atmosphere created by all Sii Power People
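The AST and dependency-mapping duties in this posting can be pictured with a minimal sketch using Python's standard-library ast module to pull import edges out of a single source file; the file path is a hypothetical example, and a real pipeline would run this across whole repositories and load the edges into a graph store.
# Minimal sketch: extracting import dependencies from a Python file with the
# standard-library ast module, as one building block of repository-level
# dependency mapping. The file path below is a hypothetical example.
import ast
from pathlib import Path

def imported_modules(path: str) -> set[str]:
    tree = ast.parse(Path(path).read_text(encoding="utf-8"))
    modules: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module)
    return modules

if __name__ == "__main__":
    # Each (file, module) pair becomes an edge in a dependency graph.
    print(imported_modules("my_service/app.py"))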
Data Platform Monitoring Specialist
AXA IT Solutions
⚲ Warszawa, Wola
7 500–12 000 zł brutto / mies.
Wymagania
- Python
- SQL
- Azure DevOps
- Azure Data Factory
- Azure Monitor
- Databricks
Nasze wymagania: Knowledge of the Microsoft Azure environment Proficiency in SQL (queries, data validation) Basic knowledge of Python (analysis / logic / debugging) Experience working with logs and monitoring systems Understanding of data pipelines (ETL/ELT) Familiarity with tools such as Azure DevOps and Jira Mile widziane: Experience with monitoring tools (e.g., Azure Monitor, Log Analytics) Experience with Azure Data Factory and Databricks Understanding of distributed systems and microservices Experience working in production environments (support / operations / NOC / SRE) O projekcie: The person in this role is responsible for the ongoing monitoring and stability of a data platform based on the Microsoft Azure ecosystem. The role focuses on ensuring continuity of data processing, rapid incident response, initial analysis, and coordination of issue resolution. Zakres obowiązków: Monitoring the health of data platform components (Azure, Databricks, Data Factory) Overseeing execution of data pipelines (ETL/ELT) Analyzing alerts and identifying anomalies in system behavior Verifying correctness of data processing Identifying, analyzing, and classifying incidents Communicating incidents and statuses (impact, ETA, progress) Coordinating the incident resolution process Escalating issues to appropriate teams (Data Engineering, DevOps, Backend) Analyzing logs, metrics, and system dependencies Identifying root causes (data / pipeline / infrastructure / application) Attempting to resolve issues not caused by application defects (e.g., pipeline reruns, job restarts) Creating and maintaining documentation (runbooks, procedures) Participating in post-mortem analysis of incidents Proposing improvements in monitoring and operational processes
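As a rough illustration of the monitoring duties above, the sketch below lists recent failed pipeline runs from a Log Analytics workspace. It assumes the azure-monitor-query and azure-identity packages, a hypothetical workspace ID, and the ADFPipelineRun diagnostics table; the exact tables and columns depend on how the platform emits logs, so treat this as a sketch to verify rather than a prescribed solution.
# Minimal sketch: listing recent failed pipeline runs from Log Analytics.
# Assumptions (not from the posting): the azure-monitor-query and azure-identity
# packages, a hypothetical WORKSPACE_ID, and an ADFPipelineRun table with a
# Status column. Adjust the KQL to the real workspace schema.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical

QUERY = """
ADFPipelineRun
| where Status == "Failed"
| project TimeGenerated, PipelineName, RunId, FailureType
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=24))

for table in response.tables:
    for row in table.rows:
        # Each row is a candidate incident to classify, rerun, or escalate.
        print(list(row))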
Data Engineer | f/m/d
ERGO Technology & Services S.A.
⚲ Gdańsk, Warszawa, Śródmieście
Wymagania
- Docker
- Databricks
- Azure DevOps
Nasze wymagania: fluency in English good knowledge of cloud solutions, preferably Azure Databricks strong SQL skills knowledge of ETL processes/programming communicative and open-minded person who is not afraid of challenges great understanding of Python & PySpark, and commercial use of it at least 3 years of experience as a Software Engineer using ML Ops, AI Model Operationalization, Python GenAI working experience with Javascript and .Net Zakres obowiązków: designing and implementing data models and architectures that support financial data management developing and maintaining ETL processes to integrate data from diverse sources, including market data feeds, customer information systems, and transactional databases ensuring data quality, security, and compliance with regulatory requirements collaborating with business owners, data analysts and scientists, SW engineers, and DevOps specialists to support the development of predictive models and business intelligence tools optimizing data processing workflows for performance, scalability, and reliability working in an international Scrum team Oferujemy: Let's be healthy – medical package, sports card, and numerous sports sections – these are some of the benefits that help our employees stay in good shape. Let's be balanced – work-life balance is a key aspect of a healthy workplace. We offer our employees flexible working hours, a confidential employee assistant program, as well as the possibility of remote working. However, staying at home with our in-office gaming room and dog-friendly office in Warsaw won’t be easy. Let's be smart – we organize numerous workshops and training courses. Thanks to hackathons and meetups, our specialists share their expertise with others. Additionally, we have a wide range of digital learning platforms and language courses. Let's be responsible – each year, we participate in several CSR activities, during which, together with our colleagues, we do our best to create a better future. Let's be fun – company-wide bike races and soccer matches, film marathons in our cinema room or other engaging team-building activities – we got it covered! Let's be diverse – every team member is valued, regardless of gender, nationality, religious beliefs, disability, age, and sexual orientation or identity. Your qualifications, experience, and mindset are our greatest benefit!
Data Engineer
ITMAGINATION
⚲ Remote
16 800 - 21 840 PLN (B2B)
Wymagania
- Python
- Spark
- SQL
- AWS
- ETL
O projekcie: Virtusa helps its Clients by becoming a true extension of their software and data development capabilities. Through readily set up, comprehensive, and self-governing teams, we let our Clients focus on their business while we make sure that their software products and data tools scale up accordingly and with outstanding quality. We are looking for experienced team players to fill the position of Data Engineer and participate in our up-and-coming project for our client from the banking area. Wymagania: - At least 3 years of experience in a specific set of tools: Spark/Scala, Python, SQL - AWS expertise in services such as S3, Glue, Athena, EMR, EC2, and Lambda - ETL design and data modeling - Analytical thinking: the ability to anticipate edge cases and scalability issues in the future - Fluent English (B2+) Codzienne zadania: - Design, develop, test, and deploy data pipelines and data platforms using AWS services - Implement and maintain data lake, lake house, or data mesh architectures - Ensure compliance with data governance, security, and records management policies - Troubleshoot and resolve data-related issues, supporting change management processes
Collibra Data Quality Product Owner
YourITeams
⚲ Warszawa, Wrocław, Kraków, Gdańsk, Poznań
25 000 - 34 000 PLN netto (B2B)
Wymagania
- product owner
- Backlog Management
- Cost Estimation
We are seeking a skilled Collibra Data Quality Product Owner to join our team and contribute to the advancement of our Data Quality initiative. The person in this role will be responsible for designing and implementing Collibra Data Quality (DQ) scorecards while empowering internal teams to sustain and develop these solutions independently. This role requires close collaboration with data owners, data stewards, platform specialists, and other stakeholders to ensure that data quality controls are effective and enduring. Responsibilities • Serve as Product Owner for the Collibra DQ scorecard capability, managing backlog prioritization, stakeholder alignment, and contributing to roadmap planning and value realization for the platform. • Conduct cost estimation and price modelling related to development, onboarding, and operational usage of the data quality capabilities. • Support platform-related tasks such as configuration, access management, and troubleshooting as needed. • Contribute to platform governance, including best practices and architectural decision-making support. Qualifications • Experience serving in a Product Owner role or similar leadership position, including backlog management and stakeholder coordination. • Competence in cost estimation and price modelling related to platform capabilities or services. • Experience working with data stewards and data owners. • Physical presence within the European Union is required due to regulatory data-sharing requirements. Desirable qualifications: • Knowledge of Collibra Metadata Catalog platform technologies. • Experience working within large, complex enterprise environments. • Familiarity with Power BI, particularly in dashboard creation and data visualization. Benefits • Engagement in a nearshore consultancy model within a collaborative data governance and quality program. • Opportunity to contribute to both delivery and enablement, enhancing internal capabilities for sustainable data quality management. • Work within a professional environment that values strategic input and operational excellence. • Remote work • Flexible work arrangements supporting remote team management. • Unique TEAL culture, relationship- and respect-driven community, non-corporate atmosphere. • Agile approach and no bureaucracy. • Outstanding integration trips to various places in Europe for all employees. • Activities to support your well-being and health. Luxmed Gold Extended medical care and Multisport Plus benefit.
Azure Data Engineer
Verita HR
⚲ Kraków
25 000 - 30 000 PLN (B2B)
Wymagania
- Data engineering
- SQL
- MS SQL Server
- Azure SQL
- SSRS
- Azure Data Factory
- Azure Data Lake
- Power BI
- Data pipelines
- Azure
O projekcie: 📍 Client and contract: lead international bank | B2B 🗣️ Recruitment: phone screen with our recruiter + 2 on-line meetings with hiring managers 🗺️ Hybrid work: 6 days per month from the office in Cracow Verita HR is an international company providing recruitment support within #Fintech, #Finance and #Banking market in EMEA. We connect the most innovative organizations with the best people in the market. We conduct systematic market research, which allows our Digital Teams to be a step ahead of the competition. We are modernizing ~a dozen .NET applications and MSSQL databases to a modular, cloud‑native Azure architecture while continuing to support and evolve existing production UIs and services (ASP.NET MVC / Razor, legacy Angular, WPF, WinForms, Windows Services, SSIS). The goal is a unified quality strategy and reliable automation that supports incremental migration, reduces regressions in legacy flows and enables safe frequent delivery. What's in it for you? - Opportunity to work on complex and strategic projects in an international environment - Stable and long-term cooperation based on a B2B contract - Hybrid work (6 days per month from the office in Cracow) and flexible working hours - Working with modern IT technologies - Growth and development opportunities with the possibility to move between projects - Private healthcare and multisport card - Referral program, free parking and company events Wymagania: - 5+ years as a Data Engineer or in a similar role - Strong hands-on experience with MS SQL Server and Azure SQL - Designing, building and migrating ETL/ELT in Azure Data Factory and SSIS - Working with SSRS and Crystal Reports, modernizing legacy reporting - Experience with Azure Data Factory and Azure Data Lake / ADLS Gen2 - Good understanding of dimensional modeling and data warehouse design - Advanced SQL skills, complex queries, tuning and optimization - Designing end-to-end data solutions and contributing to SDD - Building and maintaining enterprise data pipelines, batch and streaming - Ensuring data quality, governance and compliance with banking/security standards - Integrating multiple data sources and collaborating with .NET teams and stakeholders - Familiarity with Power BI data models, refresh strategies and performance Codzienne zadania: - Designing, building and maintaining enterprise ETL/ELT pipelines in Azure Data Factory and related Azure services - Developing, modernizing and migrating ETL processes, including SSIS, to the cloud - Building and operating batch and near real-time or streaming workflows - Designing and evolving Data Warehouse and Data Lake architectures, including dimensional modeling - Optimizing and managing MS SQL Server and Azure SQL for performance and reliability - Supporting and modernizing reporting solutions like SSRS and Crystal Reports, integrating data from APIs, files and secure sources - Collaborating closely with .NET and API teams to ensure secure, reliable data flows - Contributing to solution and data architecture, creating and maintaining SDD - Ensuring data quality, governance and compliance with banking and regulatory standards - Supporting Power BI models, refresh processes and performance tuning
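To give a flavour of the T-SQL and pipeline work described above, here is a minimal sketch of an idempotent dimension upsert against SQL Server / Azure SQL driven from Python via pyodbc; the connection string, table, and column names are illustrative assumptions rather than the client's schema.
# Minimal sketch: idempotent upsert into a hypothetical dim_customer table on
# SQL Server / Azure SQL, executed from Python via pyodbc. The connection
# string, table, and columns are illustrative assumptions only.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=dw;"   # hypothetical
    "UID=etl_user;PWD=secret;Encrypt=yes"
)

MERGE_SQL = """
MERGE dw.dim_customer AS target
USING (SELECT ? AS customer_id, ? AS customer_name) AS source
ON target.customer_id = source.customer_id
WHEN MATCHED THEN
    UPDATE SET customer_name = source.customer_name
WHEN NOT MATCHED THEN
    INSERT (customer_id, customer_name)
    VALUES (source.customer_id, source.customer_name);
"""

conn = pyodbc.connect(CONN_STR)
cursor = conn.cursor()
cursor.execute(MERGE_SQL, 42, "Example Ltd")  # parameters bind to the ? markers
conn.commit()
cursor.close()
conn.close()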
Specjalista / Specjalistka procesu w zespole Business Partner Master Data
SSAB Poland Sp.z o.o.
⚲ Wrocław, Psie Pole
Wymagania
- SQL
Nasze wymagania: Dobre zrozumienie procesów, integracji danych, architektury i narzędzi związanych z danymi podstawowymi, a także wpływu master daty na procesy biznesowe Praktyczna znajomość środowiska MS Office i języka SQL Biegła znajomość j. angielskiego (co najmniej B2) Dobre umiejętności komunikacyjne, perswazyjne, organizacyjne i interpersonalne Umiejętności rozwiązywania problemów, analityczne myślenie, radzenie sobie ze stresem Dbałość o detale, dotrzymywanie terminów i dokładność Dobry humor i entuzjazm do współpracy Wykształcenie wyższe Mile widziane: 2 lata doświadczenia w pracy z danymi podstawowymi, najlepiej przy użyciu systemów takich, jak MDM Informatica, SAP ERPs, CRM Doświadczenie w globalnym centrum usług oraz firmie produkcyjnej Zakres obowiązków: Praca w systemach MDM Informatica, SAP ERPs, CRM i innych Praca z generowaniem i analizą raportów w narzędziach: SAP ERP, SQL i MS Excel Zabezpieczanie płynności i stabilności procesu Reagowanie na potrzeby rozwojowe i odchylenia procesowe Reprezentowanie zespołu na spotkaniach związanych z bieżącymi operacjami, projektami oraz rozwojem procesu i narzędzi Samodzielne poszukiwanie rozwiązań Silny głos wpływający na ustalanie procesów biznesowych Budowanie schematów procesów Koordynowanie zadań zespołu Zapewnianie właściwej komunikacji między zespołem a jednostkami biznesowymi w organizacji Utrzymywanie dobrych relacji biznesowych Rola merytoryczna w tematyce związanej z procesami biznesowymi, aplikacjami i projektami Bezpośrednia współpraca z Service Managerem, MDM Application Managerem, Key Userami i Process Specjalistami Wiodąca rola w realizacji projektów Odpowiedzialność za koordynację szkoleń użytkowników w zakresie standardów Master Data Management Pozytywne i nieinwazyjne przekonywanie użytkowników o walorach procesu i rozwiązań Wspieranie zespołu w aktualizacji dokumentacji dotyczącej procesu Zarządzanie dostępami użytkowników Oferujemy: Zatrudnienie w oparciu o umowę o pracę Wdrożenie do roli od kwietnia 2026 r. Udział w ciekawych, dynamicznych zadaniach z dostępem do nowoczesnych narzędzi technologicznych Wymianę wiedzy i doświadczenia Pakiet benefitów, np. Pakiet prywatnej opieki medycznej Dofinansowanie kart MultiSport Możliwość dołączenia do ubezpieczenia grupowego na preferencyjnych warunkach Dofinansowanie wypoczynku, świadczenia świąteczne Program poleceń pracowniczych Dostępne, bezpłatne miejsca parkingowe dla pracowników Interesującą, pełną wyzwań pracę Przyjazne środowisko pracy, koleżeńską atmosferę Możliwość rozwoju zawodowego Możliwość pracy w trybie hybrydowym
Senior Game Analyst
Link Group
⚲ Remote
15 000 - 20 000 PLN (B2B)
Wymagania
- Tableau
- Firebase
- BigQuery
- Excel
- SQL
- F2P (nice to have)
Wymagania: - minimum 4 lata doświadczenia w analizie danych w projektach produktowych w branży technologicznej lub rozrywkowej; - bardzo dobra znajomość narzędzi analitycznych i systemów zarządzania danymi (np. Tableau, Firebase Analytics z BigQuery, Excel); - naturalna ciekawość i umiejętność szukania odpowiedzi na pytanie „dlaczego” w danych; - świetne zdolności analityczne, umiejętność wizualizacji danych oraz prezentowania wniosków i rekomendacji; - proaktywność i kreatywne podejście do rozwiązywania problemów; - doświadczenie w pracy z SQL; - doskonałe umiejętności organizacyjne. Dodatkowo mile widziane: - doświadczenie przy projektach z modelem F2P; - doświadczenie w zarządzaniu zespołem. Codzienne zadania: - bliska współpraca z zespołem nad określaniem kierunków i planów rozwoju produktu; - bieżące monitorowanie kluczowych wskaźników, w tym aktywne dbanie o kompletność i jakość danych w systemach analitycznych; - przygotowywanie rekomendacji dla zespołu, inicjowanie zmian oraz ocenianie ich wpływu na zachowania użytkowników i wyniki finansowe; - analiza zachowań użytkowników, formułowanie hipotez do testów A/B i ciągłe poszukiwanie możliwości optymalizacji mechanik produktu; - realny wpływ na strategię cenową i ekonomię produktu; - współudział w rozwoju systemów raportowania i tworzeniu nowych wskaźników KPI; - ścisła współpraca z zespołem projektowym i zarządem przy przedstawianiu wyników wprowadzonych zmian, ich wpływu na kluczowe wskaźniki oraz dalszych rekomendacji.
Senior Data Engineer (MS Fabric)
Godel Technologies Europe
⚲ Białystok, Warszawa, Gdańsk, Łódź, Wrocław
18 480 - 27 720 PLN (B2B)
Wymagania
- Python
- MS Fabric
- SQL
- Power BI
- Azure
- Data modeling
- Data warehouses
- Cloud platform
- Azure SQL Database
- MS SQL Server
- Power Query
- Communication skills
- Data pipelines
- Databricks (nice to have)
- Azure Monitor (nice to have)
O projekcie: At Godel Technologies, we are passionate about building innovative software solutions that empower businesses around the world. We are growing and looking for a talented Data Engineer to join our team. If you are interested in working with modern data technologies, solving complex problems, and making an impact — we want to hear from you! As a Data Engineer, you will be part of a collaborative and agile environment where your insights and expertise will help shape business decisions. You will work with data in Microsoft Fabric and related Azure services to prepare, transform, and analyze information for reporting purposes, primarily using Power BI. You’ll collaborate closely with cross-functional teams, including business stakeholders, Data Analysts, and Architects, to deliver accurate, timely, and actionable reports and dashboards. Where needed, you may also support data ingestion and transformation pipelines to ensure data is ready for analysis. This is a hybrid role, which means we'd like you to work in the office occasionally, especially during client visits or other important company meetings. Project Tech Stack - Programming / Query Languages: SQL (advanced), basic Python (for data manipulation and automation) - Cloud Platforms: Microsoft Fabric, Azure (Azure Data Factory, Synapse Analytics, ADLS Gen2, Azure SQL Database, MS SQL Server) - Data Preparation & Transformation: Microsoft Fabric Dataflows & Pipelines, Azure Data Factory, Power Query, Spark; basic familiarity with dbt or similar tools is a plus - Databases: Azure SQL / MS SQL Server, PostgreSQL - Data Warehousing: Microsoft Fabric Lakehouse/Warehouse, Azure Synapse Analytics - Data Visualization & Analytics: Power BI (including integration with Microsoft Fabric), DAX, Power Query - Version Control: Git, Azure DevOps, GitHub Wymagania: Must have: - At least one year of experience working with Microsoft Fabric for data ingestion and transformation - Strong understanding of data modeling, data warehousing concepts, and preparing data for reporting and analytics - At least 3 years of experience in Python - Strong SQL skills, including writing complex queries, optimizing performance, and validating data accuracy - Proficiency in Power BI, including data transformation (Power Query), DAX, and report/dashboard design best practices - Knowledge of data security, privacy, and governance best practices (e.g., RBAC, data lineage, GDPR) - Familiarity with Agile methodologies and experience working in cross-functional teams - Strong analytical thinking, problem-solving skills, and attention to detail - Excellent communication and collaboration skills, with the ability to translate business needs into actionable insights - Strong verbal and written English communication skills Nice to have: - Databricks, familiarity with Delta Lake / Medallion Architecture - Monitoring/logging tools (Azure Monitor, Log Analytics) Codzienne zadania: - Prepare, transform, and model data within Microsoft Fabric to support reporting and analytics needs - Design and build Power BI reports and dashboards that deliver clear, actionable insights for business stakeholders - Collaborate with Data Analysts, Architects, and business teams to define data requirements and ensure data is ready for analysis - Perform data validation, quality checks, and reconciliation to ensure accuracy and reliability of reports - Where required, create and maintain simple data pipelines or ingestion processes to bring data into Fabric for reporting purposes - Participate 
in discussions on data architecture and reporting solutions to ensure scalability, performance, and best practices - Maintain compliance with security, integrity, and governance standards for all reporting data - Continuously improve report performance, usability, and user adoption through feedback and iteration
Senior Data Engineer
Antal
⚲ Kraków
33 600 - 40 320 PLN (B2B)
Wymagania
- PySpark
- Apache Spark
- Hadoop
- Hive
- Spark SQL
- Airflow
- Unix
- Linux
- ETL
- SQL
- REST API
- Git
- GitHub
- CD tools
- Ansible
- Jira
- Data modelling
- Elasticsearch (nice to have)
- Java API (nice to have)
- Cloud design patterns (nice to have)
- DevOps (nice to have)
- Kanban (nice to have)
- Scrum (nice to have)
O projekcie: Senior Data Engineer (PySpark) 📍 Location: Kraków, Poland (hybrid: 2 days in the office, 3 days remote) 📄 B2B Contract: 200 – 240 PLN / hour We are looking for a proactive Senior Data Engineer (PySpark) to join an international Agile team delivering large‑scale data processing solutions for the financial sector. You will work closely with Engineers, Data Analysts and Business Analysts to design, build and optimize data pipelines supporting critical business processes. This role sits within a collaborative engineering environment and offers the opportunity to influence architecture, development standards and delivery quality across a complex data ecosystem. Key Responsibilities: - Design and develop scalable data processing solutions using PySpark or Scala. - Build and maintain data pipelines leveraging Hadoop ecosystem components (Spark, Hive, YARN, MapReduce). - Promote engineering best practices through code reviews, mentoring and knowledge sharing. - Provide production support, troubleshoot issues and ensure system stability. - Implement tools and processes ensuring performance, availability, accuracy and monitoring. - Collaborate with Business Analysts to ensure correct interpretation and implementation of requirements. - Contribute to architecture and system design discussions. - Participate in Agile ceremonies: sprint planning, reviews, retrospectives and daily stand‑ups. What We Offer: - Long‑term cooperation within a large, international financial programme. - Real influence on architecture, engineering standards and delivery quality. - Work in a multicultural, distributed environment. - Private medical care (LuxMed) and Multisport card. - Stable B2B contract with attractive hourly rate. Wymagania: - 5+ years of experience in software engineering with strong focus on PySpark development. - Hands‑on experience with Apache Spark, Hadoop, Hive, Spark SQL. - Experience with Airflow or similar scheduling tools. - Solid understanding of Unix/Linux environments. - Experience with ETL frameworks, SQL, RESTful services and distributed data processing. - Familiarity with Git/GitHub, CI/CD tools (Jenkins, Ansible) and JIRA. - Understanding of data modelling (relational & non‑relational). - Strong debugging skills and ability to clearly communicate findings. Nice to Have: - Experience with Elasticsearch. - Java API development. - Experience with ingestion frameworks. - Understanding of cloud design patterns. - Exposure to DevOps practices and Agile methodologies (Scrum, Kanban).
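For context on the kind of PySpark work described above, the following is a minimal batch-job sketch (read, filter, aggregate, partitioned write); the HDFS paths and column names are hypothetical placeholders, not the programme's actual datasets.
# Minimal sketch: a small PySpark batch job of the kind described above.
# Paths and column names (event_date, amount, account_id) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_account_totals").getOrCreate()

transactions = spark.read.parquet("hdfs:///data/raw/transactions")  # hypothetical path

daily_totals = (
    transactions
    .filter(F.col("amount") > 0)
    .groupBy("account_id", "event_date")
    .agg(F.sum("amount").alias("total_amount"),
         F.count(F.lit(1)).alias("txn_count"))
)

# A partitioned write keeps downstream Hive / Spark SQL queries pruned by date.
daily_totals.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///data/curated/daily_account_totals"
)

spark.stop()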
SCM Analyst
Sandoz Polska Sp. z o.o.
⚲ Warszawa
7 400–13 733 zł brutto / mies.
Nasze wymagania: Doświadczenie w pracy z systemami klasy ERP (szczególnie SAP) Mile widziana wiedza w zakresie programowania/pracy z kodem Znajomość narzędzi analitycznych (Excel, mile widziana znajomość Power BI) Dobra znajomość języka angielskiego (min. B2) Umiejętność pracy w zespole i komunikacji z interesariuszami technicznymi i biznesowymi. Zakres obowiązków: Współpraca z zespołami IT oraz interesariuszami w celu zapewnienia zgodności z politykami i standardami danych; Utrzymywanie i rozwijanie wiedzy na temat modeli danych i architektury systemów; Działalność operacyjna w zakresie zapewnienia dostępności produktów; Współpraca z klientami wewnętrznymi i zewnętrznymi; Tworzenie i aktualizacja narzędzi analitycznych i raportów; Analiza danych oraz przygotowywanie rekomendacji dla procesów; Aktualizacja i zarządzanie danymi w systemach, praca z danymi w systemie SAP. Oferujemy: Karta lunchowa o wartości 450 zł miesięcznie; Laptop, telefon komórkowy; Opieka medyczna; Ubezpieczenie na życie; Plan emerytalny; Multisport lub karta Kaltura+ (współfinansowana przez pracownika); Dodatkowe 2 dni wolne; Platforma do nauki języków.
Mid Data Scientist (Generative AI)
Devire
⚲ Warszawa
21 840 - 28 560 PLN (B2B)
Wymagania
- Machine Learning
- AI
- NLP
- Python
- SQL
- Azure
- GCP
- NumPy
- PyTorch
- API
- Databricks (nice to have)
- Computer vision (nice to have)
O projekcie: Devire Outsourcing IT to forma współpracy dedykowana dla specjalistów z branży IT, oparta na zasadach własnej działalności gospodarczej - B2B, realizująca projekty dla wiodących Klientów prowadzących innowacyjne i nowoczesne projekty. Dla naszego Klienta - organizacji specjalizującej się w rozwiązaniach data & analytics oraz wspierającej globalne firmy w transformacji cyfrowej i wdrażaniu AI - poszukujemy osoby na stanowisko Mid Data Scientist (Generative AI). Lokalizacja: 100% zdalnie Wynagrodzenie: 130-170 PLN netto+VAT/h; B2B via Devire Start: ASAP - Wynagrodzenie do 170 PLN netto+VAT/h, B2B via Devire - Praca 100% zdalna - Pakiet benefitów (opieka medyczna, karta multisport itp.) - Międzynarodowe środowisko Wymagania: Must have: - Minimum 4,5 roku doświadczenia w roli Data Scientist - Znajomość podstaw teoretycznych deep learningu - Doświadczenie w obszarze Machine Learning, szczególnie Generative AI (LLM/LMM), NLP lub modeli multimodalnych - Doświadczenie w zbieraniu wymagań biznesowych i przekładaniu ich na rozwiązania techniczne (w tym: przetwarzanie danych, feature engineering, ewaluacja modeli, testowanie hipotez, wdrażanie modeli) - Bardzo dobra znajomość Pythona oraz programowania obiektowego, znajomość SQL i baz wektorowych - Doświadczenie w pracy z chmurą (Azure lub GCP) - Znajomość bibliotek i narzędzi: NumPy, PyTorch, Hugging Face, LangChain, LangGraph oraz API GenAI (np. OpenAI, Gemini) - Praktyczne doświadczenie w pracy z MCP (projektowanie lub obsługa serwerów/klientów dla agentów LLM) - Poziom języka angielskiego na poziomie min. B2 Nice to have: - Doświadczenie z Databricks - Komercyjne doświadczenie w projektach GenAI, NLP lub Computer Vision - Znajomość architektury mikroserwisowej - Doświadczenie z repozytoriami kodu i narzędziami wspierającymi programowanie - Umiejętności biznesowe i zdolność proponowania kreatywnych rozwiązań - Doświadczenie w pracy zespołowej i mentoringu młodszych członków zespołu Codzienne zadania: - Projektowanie i rozwój kompleksowych aplikacji GenAI (np. chatboty, voiceboty, systemy Talk-to-Data), obejmujących przetwarzanie danych, warstwę wyszukiwania (retrieval), orkiestrację, backend/API oraz proste interfejsy użytkownika - Tworzenie i optymalizacja pipeline’ów RAG z wykorzystaniem baz wektorowych, wyszukiwania hybrydowego, rerankerów oraz mechanizmów ewaluacji jakości - Dobór modeli, projektowanie promptów oraz fine-tuning (LoRA, QLoRA, SFT) dla modeli tekstowych, kodowych i multimodalnych - Integracja i optymalizacja współpracy modeli LLM z narzędziami, API i źródłami danych (w tym z wykorzystaniem MCP) - Przekładanie potrzeb biznesowych na rozwiązania techniczne, definiowanie metryk sukcesu oraz ocena dostępności i jakości danych - Wsparcie realizacji projektów oraz udział w działaniach pre-sales
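The RAG pipeline duties above can be pictured with a deliberately framework-free sketch of the retrieval step: rank documents by cosine similarity of embeddings and hand the top hits to the generator. The embed() function and the document list are hypothetical stand-ins; a production setup would use a real embedding model and a vector database with hybrid search and reranking, as the posting describes.
# Minimal, framework-free sketch of the retrieval step in a RAG pipeline:
# rank documents by cosine similarity of embeddings and pass the top hits to
# the generator as context. embed() is a hypothetical stand-in for a real
# embedding model; a production setup would use a vector database instead.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding (hash-seeded random vector) so the sketch runs;
    # replace with a real embeddings model or API in practice.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

DOCS = [
    "Policy renewal requires a signed form.",
    "Claims above 10k PLN need manual review.",
    "Customer data retention is limited to 5 years.",
]

def top_k(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    doc_vecs = np.stack([embed(d) for d in DOCS])
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [DOCS[i] for i in np.argsort(sims)[::-1][:k]]

print(top_k("How long do we keep customer data?"))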
IT Internal Auditor-Technology
U.S.Bank | Elavon
⚲ Warszawa, Mokotów
Wymagania
- Python
- JSON
- SQL
- Amazon Web Services
- Azure
- COBIT
- COSO
- CCM
- NIST CSF
Nasze wymagania: 6+ years work experience in technology audit and ITGC Testing Working knowledge and experience in python, JSON and SQL Experience in auditing IT cloud operations, network, infrastructure, and security preferably related to Amazon Web Services and Azure Experience in IT security and IT governance risk and compliance Strong understanding of cybersecurity processes and concepts (e.g. vulnerability management, security governance, software development, incident response, physical security, auditing and logging, micro segmentation, secure access service edges, zero trust architecture, PKI, penetration testing) as well as application controls Experience in auditing, compliance, and/or risk FFIEC, GLBA, GDPR and PCI Working knowledge and experience with professional standards including CCM, NIST CSF, COSO and COBIT Excellent listening and communication skills in both written and verbal forms; previous experience in writing internal audit reports, preferred Strong analytical, interpersonal and communication skills Current certifications, such as CISA or CISSP Demonstrate and apply a thorough understanding of complex information systems. Demonstrate and apply strong project management, time management and organizational skills Must possess an understanding of Information Security policies and standards, and have a working knowledge of Business Continuity Programs, electronic banking software and applications, Cloud computing, Cybersecurity Regulatory Framework, and Vendor Management practices Must be proficient using Microsoft Office software Ability to operate independently and perform quality work within the scheduled timeframe. O projekcie: US Bank Europe is seeking a highly motivated Information Technology (IT) Controls Tester to join our Business Line Quality Assurance (BLQA) program. This role is critical to helping the company identify and address compliance, financial, operational, strategic and technology risks in technology processes. The work requires proficiency in the areas of internal ITGC control testing/IT auditing. The role will focus on robust planning and execution of technology control testing. Zakres obowiązków: Independently validate the design and operational effectiveness of IT General Controls and Cloud controls Perform control procedure and documentation reviews including conducting interviews to clarify processes, data flows and architectures Prepare test scripts Perform root cause and impact analysis and provide management with recommendations to resolve issued findings. Advise business partners on IT findings, risks and control weaknesses. 
Validate findings post remediation Use knowledge of the current IT environment and industry IT trends to help identify and anticipate potential issues that may impact the bank's risk landscape Design and assist in building continuous monitoring/reporting to improve efficiency and awareness of control testing activities Provide technical assistance on audit techniques Maintain an understanding of the cybersecurity footprint, platform architecture, cloud infrastructure, data governance and privacy compliance, general computing control structure of the Company (systems and architecture) and be able to apply that knowledge to how it supports the processes and procedures being reviewed Develop and maintain strong and effective working relationships with key business partners Proactively engage and follow up to ensure deliverables are met, and identified gaps have been communicated Oferujemy: This role requires working from a U.S. Bank location three (3) or more days per week.
Graph Database Engineer
Green Minds Sp. z o.o.
⚲ Warszawa
210–250 zł netto (+ VAT) / godz.
Wymagania
- Neo4j
- Cypher
- GraphQL
- Graph Theory
- REST API
- Git
- Java
Nasze wymagania: 3+ years of experience in software engineering 2+ years of hands-on experience with Neo4j and Cypher Strong knowledge of graph theory concepts Java development experience Experience with graph data modeling and performance optimization Familiarity with cloud environments (Azure / AWS / GCP) English B2+ Polish Mile widziane: Experience with distributed systems or microservices CI/CD and DevOps practices Knowledge graphs or ML integration Academic background or certifications (Neo4j / Cloud) O projekcie: We’re taking over and continuing the development of a Neo4j-based web platform. The platform processes complex, interconnected datasets used in real analytical workflows, enabling you to work with real-world graph structures at scale. You will have a direct impact on architectural decisions during the transition and scaling phase, not just maintain legacy code. It’s a rare opportunity to shape a production-grade Neo4j environment that is actively used by global teams. The project is in the transition and scaling phase, focused on knowledge transfer and further feature development. It’s a long-term international project (min. 2 years), offering stable collaboration and room for technical growth. The scope of responsibilities and technical ownership will be adjusted depending on the engineer’s experience level and the project phase. Zakres obowiązków: Design and maintain graph data models using Neo4j Develop and optimize Cypher queries and Java integrations Improve database performance, indexing, and scalability Participate in architectural decisions during transition and scaling phase Collaborate with data scientists and backend teams on analytics features Participate in knowledge transfer and onboarding (as the project scales) Ensure documentation, data security and operational stability Oferujemy: 100% remote work with an international, English-speaking team Long-term B2B contract Regular working hours (9:00–17:00 CET) Involvement in a high-impact, technically advanced project A collaborative environment where your expertise really counts Sounds like a good match? Apply now and help us build the next generation of graph-based data systems.
F/M: Platform Engineer (Azure / Databricks)
PEOPLEVIBE SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Warszawa
190–220 zł netto (+ VAT) / godz.
Wymagania
- Databricks
- Microsoft Azure
- Terraform
- Python
- SQL
- Apache Spark
- Azure DevOps
- GitHub
- Kafka
Nasze wymagania: doświadczenie w pracy z Databricks (praktyczne, nie tylko teoretyczne) doświadczenie w budowie lub utrzymaniu Data Lakehouse dobra znajomość Azure doświadczenie z Terraform (IaC) praktyczna znajomość CI/CD (Azure DevOps lub GitHub) znajomość Python i SQL doświadczenie w pracy z Apache Spark certyfikaty: Microsoft Azure (Administrator, Developer, Security Engineer, Network Engineer, Solutions Architect, Database Administrator, AI Engineer, Fabric Data Engineer) Databricks (Data Engineer Professional, ML Engineer, Generative AI, Data Analyst, Apache Spark) Kubernetes (CKAD) dbt (Analytics Engineering, Architect) Microsoft MVP (Azure, Data Platform), Databricks MVP Mile widziane: doświadczenie z dbt praca z Unity Catalog doświadczenie w zarządzaniu kosztami Databricks doświadczenie w obszarze data governance / access management znajomość integracji (API / Kafka) doświadczenie z rozwiązaniami AI w środowisku produkcyjnym O projekcie: Projekt realizowany dla skandynawskiej organizacji odpowiedzialnej za stabilność i rozwój infrastruktury elektroenergetycznej na poziomie krajowym. Zespół pracuje nad nowoczesną platformą danych wspierającą zarządzanie systemem energetycznym i transformację w kierunku przyszłościowych rozwiązań. Szukamy osoby, która realnie budowała i rozwijała środowiska data platform — nie tylko pisała pipeline’y, ale rozumie całość: architekturę, governance, dostęp i optymalizację. Zakres obowiązków: rozwój i utrzymanie środowiska Data Lakehouse budowa i rozwój środowisk Databricks (development + monitoring) projektowanie i rozwój CI/CD pipelines (Azure DevOps / GitHub) zarządzanie infrastrukturą jako kod (Terraform) zarządzanie dostępami i governance (w tym Unity Catalog) optymalizacja kosztów i wydajności środowiska Databricks współpraca przy integracjach (API / Kafka / streaming) Oferujemy: projekt o dużej skali i realnym wpływie na infrastrukturę krytyczną nowoczesny stack technologiczny (Azure + Databricks + Lakehouse) rola z realnym wpływem na architekturę i rozwój platformy środowisko nastawione na jakość
DevOps Engineer (Data)
UPVANTA SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Wrocław, Fabryczna
21 000–24 000 zł netto (+ VAT) / mies.
Wymagania
- AWS
- Databricks
- Jenkins
- Python
- Linux
Nasze wymagania: Doświadczenie na stanowisku DevOps Engineer minimum 5 lat Praktyczna znajomość: AWS / Databricks / Jenkins Doświadczenie w pracy z danymi lub projektami data (data pipelines, big data) Znajomość Python Umiejętność automatyzacji procesów i pracy z CI/CD Dobra znajomość systemów Linux O projekcie: Poszukujemy DevOps Engineera z doświadczeniem w obszarze danych, który dołączy do zespołu pracującego nad nowoczesnymi rozwiązaniami data-driven. Projekt obejmuje rozwój i utrzymanie środowisk opartych o Databricks oraz AWS, ze szczególnym naciskiem na automatyzację i optymalizację procesów. Zakres obowiązków: Zarządzanie i rozwój infrastruktury DevOps w środowisku AWS Praca z platformą Databricks (deployment, konfiguracja, optymalizacja) Tworzenie i utrzymanie pipeline’ów CI/CD (Jenkins) Automatyzacja procesów (Infrastructure as Code, skrypty) Monitorowanie, optymalizacja i utrzymanie środowisk produkcyjnych
Data platform engineer (senior) - Nykredit
BEC Poland
⚲ Warszawa, Śródmieście
Nasze wymagania: Solid experience with Microsoft Fabric or strong familiarity with Azure Data Lake, Synapse, ADF, or other modern cloud data platforms. Strong data engineering fundamentals, such as ingestion patterns, ELT/ETL, data modeling, storage layers, orchestration, and event-driven architectures. Hands-on coding and scripting skills in C#, Python, SQL, and Terraform (or similar IaC tools). An operational mindset, including experience with observability, SLIs/SLOs, incident response, troubleshooting, and automation. Experience with DevOps practices—CI/CD pipelines, Git workflows, and automated deployments. Monitoring experience using Azure Monitor and similar tools. Strong understanding of cloud security and compliance, including access management, classification, lineage, and governance principles. Excellent collaboration and communication skills, enabling you to work effectively with engineers, architects, product owners, and business stakeholders. Fluent English (written and spoken) Mile widziane: Experience with other cloud data platforms such as Snowflake or Databricks. Knowledge of Delta Lake, Spark, or streaming architectures. Familiarity with Fabric Capacity management and workload optimization. Experience with Microsoft Purview (governance, lineage, metadata) O projekcie: For this role, we are looking for someone to join Nykredit – one of the banks BEC Poland works closely with. Nykredit is both a bank and a mortgage provider owned by an association of customers. Being customer-owned, they are in a unique position to give something back to their customers when they are doing well. Together they can simply do more for their customers and for society. This position is based at our friendly office in Warsaw and you will join our Wealth Technology area. At Nykredit, we are experiencing strong progress and, in the spring of 2024, we launched our Wealth Management platform program, where we are re‑implementing our core portfolio management system, SimCorp Dimension (SaaS). As part of this initiative, we are ensuring that all surrounding solutions are fit for purpose and built to integrate seamlessly with the new core system. We envision our future in Azure, built on serverless and event‑driven components. Data plays a central role in this transformation, and our strategy is firmly rooted in delivering a modern, cloud‑native platform. We are now expanding our team and seeking a Data platform engineer to support, develop, and operate our implementation of Microsoft Fabric At Nykredit, we prefer to collaborate often in the office, but we also keep the opportunity to work remotely up to eight days per month. Your direct manager will be Søren Valther Rasmussen. Zakres obowiązków: Develop, operate, and maintain Nykredit’s Microsoft Fabric–based data platform, ensuring reliability, performance, and scalability across ingestion, processing, storage, and delivery workloads. Build and optimize data pipelines, both batch and event‑driven, enabling efficient integration with SimCorp Dimension SaaS and other business-critical systems. Implement reusable platform patterns for ingestion, transformation, governance, lineage, and observability to support onboarding of new teams and workloads. Take ownership of day‑to‑day operations, including monitoring Fabric capacities, troubleshooting failures, cost management, incident triage, and driving operational excellence. Collaborate with architects, cloud engineers, and product teams to design secure and scalable integrations across Azure and Fabric. 
Automate operational workflows through CI/CD, Infrastructure as Code, and scripting tools. Ensure compliance with governance and security requirements, including access controls, auditability, and data protection standards. Contribute to the evolution towards a fully serverless, event-driven architecture, helping define future patterns, technical strategy, and engineering best practices. Oferujemy: Flexible working hours (start between 7:00 and 9:30) Mental health support Free lunch at the office Professional development Referral bonus up to PLN 10,000 PLN 600 on a benefit platform a month Passion clubs and social events (Family picnics, salsa dancing, board games and more!)
Data Engineer with Databricks
Spyrosoft
⚲ Warszawa, Białystok, Kraków, Wrocław
16 800 - 21 840 PLN (B2B)
Wymagania
- Data engineering
- SQL
- Excel
- Python
- Apache Spark
- Microsoft Azure
- Databricks
- Power BI
- ETL
- Data Lake
- Data warehouses
- Blob Storage
- AI
O projekcie: Project description: Design, build, and maintain scalable data platforms that transform complex, unstructured R&D data into reliable, structured, and analytics-ready assets. This role operates at the intersection of data engineering, scientific research, and digital innovation, offering a direct impact on how data is captured, structured, governed, and utilized across the client’s laboratories worldwide. Tech stack: - SQL - Excel - Python - Microsoft Azure - Azure Blob Storage - Databricks - Power Apps - Power BI Wymagania: Requirements: - Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or related field - ~3 years of progressive experience in data engineering, preferably in R&D, energy, chemical, or similar sectors Proficiency: - SQL databases - Excel - Python coding - Hands-on experience with distributed data processing frameworks (e.g., Apache Spark, Databricks) - Hands-on experience with Microsoft Azure (including Azure Blob Storage and Databricks) Nice to have: - Knowledge of Power Apps - Power BI - Web UI development skills Codzienne zadania: - Transform unstructured and semi-structured R&D data into standardized, governed, and analytics-ready data assets. - Design, build, and operate scalable ETL/ELT pipelines, data lakes, and data warehouses with strong data quality, monitoring, and lineage. - Modernize legacy data flows through APIs and integration patterns within the Microsoft Azure ecosystem (e.g., Databricks, Blob Storage). - Optimize cloud infrastructure for performance, security, and cost efficiency while supporting enterprise data governance and compliance. - Partner with Data Scientists, Analysts, R&D scientists, IT teams, and vendors to enable ML/AI use cases and smart laboratory initiatives. - Identify siloed data sources and propose strategies to extract value through connectivity - Collaborate with cross-functional teams to design, implement, and maintain web applications using the newly structured data. - Contribute to Innovation Excellence standards, reusable components, and technical knowledge sharing across the whole company.
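As an illustrative sketch only (not part of the posting), the kind of PySpark transformation that turns semi-structured R&D exports into an analytics-ready table could look like the example below; the Blob Storage account and path, field names, and target table are all hypothetical, and the Delta target is an assumption.

```python
# Illustrative sketch (not from the posting): turning semi-structured JSON
# exported from a lab instrument into an analytics-ready table with PySpark.
# Storage account, paths, field names, and the target table are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("rnd_flatten_sketch").getOrCreate()

# Raw instrument exports landed in Azure Blob Storage (path is hypothetical).
raw = spark.read.json("wasbs://raw@labdata.blob.core.windows.net/experiments/")

# Explode the nested list of measurements so each reading becomes one row,
# keeping only governed, well-named columns.
readings = (
    raw
    .withColumn("measurement", F.explode("measurements"))
    .select(
        F.col("experiment_id"),
        F.col("instrument.serial_number").alias("instrument_serial"),
        F.col("measurement.name").alias("parameter"),
        F.col("measurement.value").cast("double").alias("value"),
        F.to_timestamp("measurement.recorded_at").alias("recorded_at"),
    )
    .dropDuplicates(["experiment_id", "parameter", "recorded_at"])
)

# Persist as a governed, analytics-ready table (Delta format is an assumption).
readings.write.format("delta").mode("append").saveAsTable("rnd.instrument_readings")
```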
Data Engineer (mid/senior)
Harvey Nash Technology
⚲ Warszawa
24 000 - 35 000 PLN (PERMANENT)
Wymagania
- Python
- graph databases (nice to have)
- Spark (nice to have)
- Scala (nice to have)
O projekcie: For our Client we are currently looking for a Data Engineer with strong experience in Python development. Warsaw, hybrid. Full-time employment contract (umowa o pracę), 24 000 - 27 000 PLN gross per month. Wymagania: - 3+ years of experience in Data Engineering or a related field. - Strong hands‑on experience with Python development. - Experience working with Spark or Scala. - Demonstrated interest or experience in big data technologies. - Ability to design creative and innovative technical solutions. - Knowledge of graph databases is an advantage. Codzienne zadania: - Develop and maintain big data processing pipelines for ingestion, transformation, and delivery. - Build and enhance cloud-based infrastructure using distributed systems techniques. - Collaborate with investment professionals, researchers, and data scientists to design and launch end‑to‑end data solutions. - Drive innovation by evaluating and conducting POCs for emerging big data technologies. - Create and support data visualization and exploration tooling for large structured and unstructured datasets.
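Purely as a hedged illustration of the pipeline work and the graph-database angle mentioned above (not code from the project), a small PySpark step that shapes raw records into an edge list suitable for loading into a graph store might look like this; all paths and column names are hypothetical.

```python
# Illustrative sketch (not from the posting): deriving a simple edge list
# (a shape that loads naturally into a graph database) from raw holdings data.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("edge_list_sketch").getOrCreate()

# Raw positions: which portfolio holds which instrument, and at what value.
positions = spark.read.parquet("/data/raw/positions/")

# Aggregate into portfolio -> instrument edges weighted by market value.
edges = (
    positions
    .groupBy("portfolio_id", "instrument_id")
    .agg(F.sum("market_value").alias("weight"))
    .withColumnRenamed("portfolio_id", "src")
    .withColumnRenamed("instrument_id", "dst")
)

# Write in a format a downstream graph-database importer can consume.
edges.write.mode("overwrite").parquet("/data/curated/portfolio_instrument_edges/")
```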
Data Engineer (Scala / Spark / Azure)
AXA IT Solutions
⚲ Warszawa, Wola
120–165 zł netto (+ VAT) / godz.
Wymagania
- Scala
- Apache Spark
- Azure Data Factory
- Databricks
- CI/CD
Nasze wymagania: Hands-on experience with Scala (at least one commercial or substantial project; expert level not required). Experience working with Apache Spark. Familiarity with Databricks for data processing and development. Experience with Azure Data Factory (ADF) or similar orchestration tools. Understanding of data pipeline concepts, ETL/ELT processes, and data modeling basics. Proactive and communicative approach to work. Good command of English (written and spoken) is required. Ability to work with existing codebases and adapt to changing requirements. Mile widziane: Experience with Azure ecosystem components. Exposure to performance tuning in Spark. Familiarity with CI/CD practices in data engineering. O projekcie: We are looking for a Data Engineer to join our team to support the development and maintenance of data pipelines built on an Azure-based stack. The role focuses primarily on implementing data model changes, maintaining existing transformations, and ensuring the reliability and performance of data processing workflows. Zakres obowiązków: Implement new data structures by adding tables and columns based on provided specifications. Maintain and troubleshoot existing pipelines, including identifying and fixing bugs. Modify and enhance data transformations in response to evolving business requirements. Optimize data processing workflows to improve performance and efficiency. Collaborate with other team members to ensure data consistency and quality. Use Databricks notebooks for debugging, testing, and validating data processing logic. Oferujemy: Opportunity to work on scalable data processing systems in a cloud environment. Collaborative team focused on practical problem-solving and continuous improvement. Exposure to modern data engineering tools and architecture.
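For illustration only (the role itself is Scala-based; this sketch uses PySpark purely to show the same Spark concept), a typical change of adding a column to an existing transformation and letting the target table pick up the new schema might look like this; the Delta tables and column names are hypothetical.

```python
# Illustrative sketch (not from the posting, and in PySpark rather than the
# role's Scala): adding a new column to an existing transformation and writing
# it back with schema evolution enabled. Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("schema_change_sketch").getOrCreate()

claims_raw = spark.table("bronze.claims")

# New business requirement (hypothetical): expose the claim age in days.
claims = claims_raw.withColumn(
    "claim_age_days",
    F.datediff(F.current_date(), F.col("claim_opened_date")),
)

# mergeSchema lets the Delta target accept the newly added column.
(
    claims.write
    .format("delta")
    .mode("overwrite")
    .option("mergeSchema", "true")
    .saveAsTable("silver.claims")
)
```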
Data Engineer (Databricks)
IN4GE sp. z o.o.
⚲ Warszawa
160–200 zł netto (+ VAT) / godz.
Wymagania
- Python
- PySpark
- Databricks
- Delta Lake
- SQL
- Azure Data Factory
- Azure DevOps
Nasze wymagania: Very good knowledge of Python and PySpark. Experience working with Databricks and Delta Lake. Very good SQL skills and the ability to analyse complex database logic. Experience in data engineering projects. Knowledge of clean code principles and OOP. Experience working with large code repositories. Experience in data modeling. Very good command of English. Mile widziane: Experience with Azure Data Factory. Knowledge of CI/CD and Azure DevOps. Experience with analytical tools. Familiarity with DevOps or infrastructure topics. O projekcie: For our Client, we are looking for an experienced Senior Data Engineer for a long-term project involving the transformation of a data platform in an international insurance environment. The project focuses on migrating SQL Server–based solutions to a modern Databricks platform and building scalable data engineering processes in the cloud. Zakres obowiązków: Analysis and refactoring of complex SQL logic (stored procedures). Implementation of data transformations in Python and PySpark. Design and development of data models in the Medallion architecture (Bronze / Silver / Gold). Building scalable data transformation processes in Databricks. Ensuring code quality, modularity, and maintainability. Participation in code reviews and architectural decisions. Working with large data volumes in an enterprise environment. Oferujemy: 100% remote cooperation. A long-term project in an international environment. B2B contract. Work on a modern data platform. Participation in a large technology transformation project.
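As a minimal, hypothetical sketch of the Medallion-style work described above (not code from the project), a Bronze-to-Silver-to-Gold flow in PySpark on Databricks, standing in for logic that previously lived in a SQL Server stored procedure, might look like this; all table and column names are assumptions.

```python
# Illustrative sketch (not from the posting): a Medallion-style flow on
# Databricks. Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion_sketch").getOrCreate()

# Bronze: raw policy records ingested as-is from the source system.
bronze = spark.table("bronze.policies")

# Silver: cleansed and conformed records (deduplicated, typed, filtered).
silver = (
    bronze
    .dropDuplicates(["policy_id"])
    .withColumn("premium", F.col("premium").cast("decimal(18,2)"))
    .withColumn("start_date", F.to_date("start_date"))
    .filter(F.col("policy_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.policies")

# Gold: business-level aggregate of the kind a stored procedure used to compute.
gold = (
    silver
    .groupBy("product_line", F.year("start_date").alias("start_year"))
    .agg(
        F.countDistinct("policy_id").alias("policy_count"),
        F.sum("premium").alias("total_premium"),
    )
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.premium_by_product_year")
```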
Data Engineer
Antal
⚲ Kraków
31 080 - 36 120 PLN (B2B)
Wymagania
- Spark
- Spark SQL
- Airflow
- Unix
- Linux
- Apache Hadoop
- Apache Spark
- Yarn
- Hive
- Python
- ETL
- SQL
- REST API
- Git
- GitHub
- Ansible
- Jenkins
- Scala
- Data pipelines
- Hadoop
- Jira
- Elasticsearch (nice to have)
- Java API (nice to have)
- Cloud platform (nice to have)
- Design Patterns (nice to have)
- DevOps (nice to have)
- Kanban (nice to have)
O projekcie: 📍 Location: Kraków, Poland (hybrid: 6 days per month in the office) 📄 B2B Contract: 185 - 215 PLN / hour. We are looking for a Data Engineer to join an international project delivered for a large organization in the financial services sector. The team is responsible for building and maintaining scalable data pipelines, supporting analytical and business teams, and ensuring high-quality data for strategic decision-making. This role involves working with modern big data technologies and distributed systems, contributing to data engineering best practices, and collaborating closely with analysts and engineers. The scope of responsibilities will be adjusted based on the candidate’s experience and technical profile. Key Responsibilities: - Collaborate with Engineers, Data Analysts, and Business Analysts to design, develop, and operate data pipelines and systems. - Implement tools and processes to handle performance, scale, availability, accuracy, and monitoring of data platforms. - Build and maintain ETL processes, using Hadoop, Spark, Hive, and other big data components. - Ensure requirements are correctly interpreted and implemented by liaising with BAs. - Promote development standards through code reviews, mentoring, and knowledge sharing. - Participate in Agile ceremonies, including sprint planning, reviews, and retrospectives. - Provide production support, troubleshoot issues, and contribute to improving system reliability. - Contribute to system architecture and design decisions. What We Offer: - Long-term cooperation on an enterprise-scale international project. - Exposure to modern technologies and microservices-based architectures. - Real influence on the development of internal engineering platforms. - Private medical care (LuxMed) & Multisport card. Wymagania: - 5+ years of experience in software design and development, preferably in PySpark or Scala. - Hands-on experience building data pipelines using Hadoop ecosystem components (Hive, Spark, Spark SQL). - Experience with scheduling tools such as Airflow. - Strong knowledge of Unix/Linux platforms. - Experience with big data frameworks: Apache Hadoop, Apache Spark, YARN, Hive, Python, ETL frameworks, MapReduce, SQL, RESTful services. - Familiarity with version control (Git/GitHub), automated deployment tools (Ansible, Jenkins), and requirement management (Jira). - Strong analytical and debugging skills. Nice to Have: - Experience with Elasticsearch and Java API development. - Knowledge of cloud platforms and cloud design patterns. - Experience with data ingestion processes. - Exposure to DevOps and Agile methodologies (Scrum, Kanban).
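Purely as an illustration of the Airflow-plus-Spark orchestration this posting describes (not code from the project), a minimal DAG that submits a PySpark job over Hive tables could look like the sketch below; the DAG id, script path, connection id, and the presence of the apache-spark Airflow provider are all assumptions.

```python
# Illustrative only (not from the posting): a minimal Airflow DAG submitting a
# daily PySpark job that reads Hive tables via Spark SQL. DAG id, script path,
# and connection id are hypothetical; requires the apache-spark provider.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_hive_aggregation",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                    # older Airflow versions use schedule_interval
    catchup=False,
) as dag:
    # Submits a PySpark script (maintained in the repo) that aggregates
    # transaction data from Hive tables and writes a result table.
    aggregate = SparkSubmitOperator(
        task_id="aggregate_transactions",
        application="/opt/jobs/aggregate_transactions.py",  # hypothetical path
        conn_id="spark_default",
        conf={"spark.sql.catalogImplementation": "hive"},
    )
```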