Job offers
Filters
Integration Engineer
Xceedance Consulting Polska Sp. z o.o.
⚲ Kraków
Requirements
- AWS
- API
- Microsoft Azure Cloud
- Python
- NoSQL
The Advanced Intelligence Work Group is seeking an Integration Engineer to design, build, and maintain robust backend integrations that power data flow across systems, applications, and AI-driven copilots. This role is critical for enabling seamless connectivity between databases, third-party services, APIs, and internal tools, while occasionally leveraging no-code platforms to streamline workflows. Role & Responsibilities: • Design, develop, and maintain RESTful APIs for internal and external systems. • Integrate third-party APIs (e.g., CRM, SaaS tools) with company applications and AI copilots. • Build and optimize data pipelines to ensure reliable, real-time data flow between databases, applications, and analytics platforms. • Implement and manage ETL/ELT processes for data transformation and synchronization. • Develop automated workflows using no-code/low-code orchestration tools (e.g., n8n, Zapier, Make.com) to streamline business processes. • Collaborate with AI/ML teams to ensure data availability and quality for training and inference in copilots. • Monitor, troubleshoot, and enhance API performance, security, and scalability. • Maintain documentation for integrations, data flows, and system architecture. • Collaborate with DevSecOps and compliance teams to ensure data security and compliance with data privacy regulations (e.g., GDPR, CCPA) in all integrations. Skills & Expertise: Required: • Proficiency in API design, development, and management (FastAPI, OpenAPI/Swagger). • Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, DynamoDB). • Hands-on experience with cloud platforms (AWS, Azure, GCP) and serverless architectures (Lambda, API Gateway). • Familiarity with ETL tools and data pipeline frameworks. • Good familiarity with no-code/low-code workflow automation platforms (n8n, Zapier). • Strong scripting skills (Python, JavaScript, or Go) for custom integrations. • Knowledge of authentication/authorization protocols (OAuth2, JWT, SAML). • Understanding of data modeling, schema design, and API security best practices. Nice to Have: • Experience with message brokers (Kafka, RabbitMQ) and event-driven architectures. • Familiarity with API gateways (Kong, Apigee), monitoring tools (Postman, Datadog), and structured logging (Logfire, Logstash). • Knowledge of data governance and metadata management. • Certifications in cloud platforms or integration tools (AWS Certified Developer, Zapier Expert). Qualifications: • Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent experience). • 5+ years of experience in API development, data integration, or backend engineering roles. • Proven track record of building scalable integrations with third-party systems and no-code platforms. • Strong problem-solving skills and ability to work in cross-functional teams (Engineering, Data Science, Product). • GSD (Get Stuff Done) attitude is critical.
Database Engineer
Grafton Recruitment
⚲ Częstochowa
7 500 - 12 500 PLN gross (UoP)
Requirements
- DWH
- ETL
- Data Warehousing
- Azure
- Relational Databases
- MS SQL
- SQL
We have a great opportunity for you to grow your database, data‑warehouse, and ETL engineering skills while contributing to innovation in the logistics industry. You will join an international team focused on enhancing the current data landscape through Azure Cloud solutions and new AI‑driven initiatives. You will focus on: • Building and maintaining Azure‑based data platforms and data warehouses, • Designing SQL databases and developing efficient ETL pipelines, • Creating lightweight business applications in Power Apps and connecting them with the broader Microsoft ecosystem, • Supporting analytics and reporting initiatives through Power BI, • Experimenting with emerging technologies, including machine learning and AI solutions, • Overseeing data quality, security, and overall system performance, • Preparing documentation and actively sharing knowledge across the team. What do you bring? • Hands‑on experience working with Microsoft Azure–based data warehouses, • Strong proficiency in SQL and relational database technologies, • Practical experience building low‑code business applications using Power Apps, • Genuine interest in AI and machine learning technologies, • Solid understanding of cloud architecture and modern data platform principles, • A bachelor’s degree in ICT, Computer Science, Data Engineering, or a related field, • Good command of English. This is a hybrid role with three remote days per week (two during the onboarding period). Employment benefits include private medical insurance and a sports card.
Data Engineer (NXJ-84)
Newxel
⚲ Warszawa
Requirements
- DWH
- ETL
- Kubernetes
- Data modeling
- SQL
- Python
We’re seeking a skilled and driven Data Engineer to join a fast-moving team building modern, cloud-scale data and BI solutions. In this role, you’ll design and develop robust data pipelines, ETL processes, and data models that transform raw data into trusted insights. You’ll deliver high-performance, scalable data infrastructure that powers reporting and analytics across the organization—helping teams make faster, smarter, and more reliable decisions. Responsibilities • Design and develop scalable and efficient data pipelines and models • Develop and implement ETL processes to ensure efficient loading of data from various sources • Implement best practices for data pipelines, data warehouse and database performance tuning • Execute end-to-end BI and data engineering solutions using leading frameworks and technologies • Serve as the primary Data focal point for internal customers • Write high-quality, maintainable code in a rich data environment • Work closely with Product, R&D, Operations, and Business teams to ensure seamless integration and delivery of BI and Data solutions Requirements • BSc in Computer Science or an equivalent background • 3+ years of prior experience as a BI Engineer or Data Engineer • Experience building cloud-scalable, real-time and high-performance data infrastructure and pipeline solutions • Excellent proficiency in Python or Airflow – must • Proficient in data modeling, ETL development and DWH methodologies • Strong SQL skills and experience working with Snowflake or similar DB • Experience with Docker, Kubernetes, and modern CI/CD practices – big advantage • Excellent communication and collaboration skills, with experience working across multiple stakeholders and business units What we offer • Competitive salary range • Medical insurance • Paid vacation and sick leaves • MultiSport card • Top equipment kit, coworking spaces • Hybrid work setup (office location: Warsaw) • Collaborative and innovative work environment • Career growth and development opportunities • A chance to work with giants of the sports industry About the project Our partner leads the industry in generating dynamic sports videos for every digital destination. Their cutting-edge AI and Machine Learning technologies analyse live sports broadcasts from over 250 leagues and broadcast partners, including iconic names like the NBA, NHL, ESPN, FIBA and Bundesliga, to create personalized, short-form videos in real-time. The solution empowers media rights owners to unlock new revenue streams and deliver a tailored fan experience across every digital platform. Join the high-profile Engineering team and discover the forefront of sports content innovation.
Senior Azure Data Engineer | Hybrid from Warsaw
DCV Technologies
⚲ Warszawa
Requirements
- SQL DW
- Databricks PySpark
- Azure
- Terraform
- SQL
- Python
- NoSQL
Senior Azure Data Engineer We are looking for a Senior Azure Data Engineer on behalf of our client, a well-known and fast-growing technology company offering a stable, long-term project in a modern environment. 📍 Work mode: Hybrid, 1-2 days from the Warsaw office 📄 Contract: B2B Must Have: • Python • SQL and NoSQL databases • Databricks PySpark (Structured Streaming, batch) • Experience with Azure ADLS, Databricks, EventHub, Event Grid, SQL DW, Azure Functions, Serverless Architecture, Terraform (theory is a must, practical experience is nice to have) Good to Have: • Experience with ARM Templates, Azure Data Factory • Knowledge of Kubernetes and Google Cloud Platform • Machine learning and AI Competencies: • Digital: Microsoft Azure Experience (Years): • 6-8 ✅ Benefits: · Work with modern technologies and international teams · Long-term and stable cooperation · Professional growth opportunities 📩 If you’re interested and meet the qualifications, please send your CV to Alina Pchelnikova at alina.pchelnikova@dcvtechnologies.co.uk
Lead Data Engineer (GCP / PySpark)
Shimi Sp. z o.o.
⚲ Poland (Remote)
170 - 190 PLN/h net (B2B)
Requirements
- CI/CD
- PySpark
- GCP
For one of our clients, we are looking for a Lead Data Engineer (GCP / PySpark). Client Description The client is an international organization operating in the digital media and streaming platforms domain, delivering large-scale digital solutions for millions of users. The company develops modern, cloud- and data-driven solutions, with a strong focus on quality, stability, and system scalability. Teams work in a cross-functional environment with real impact on product and architectural development. Technical Requirements • Experience in a Senior / Lead Data Engineer role • Hands-on experience with Google Cloud Platform (GCP) • Strong knowledge of PySpark • Experience in data engineering projects in a cloud environment • Participation in data platform migrations (including migrations between technologies/environments) • Knowledge of or experience in the following areas: CloudOps / DevOps; automation and CI/CD; reliability and observability (nice to have) • Excellent communication skills – critical for this role • English at minimum B2+ / C1 level (UK-based client) Scope of Responsibilities • Acting as a technical lead / leading contributor in the Data Engineering area • Organizing and structuring the team’s work • Removing day-to-day technical and process blockers • Supporting the team in technical and architectural decision making • Active communication with stakeholders and progress reporting • Contribution to the migration of the data platform to GCP Offer • Rate: up to 190 PLN/h net + VAT • Start: ASAP • B2B contract • 100% remote • Engagement: long-term – minimum 6 months, with a possibility of extension
Data Center Engineer
TQLO SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Warszawa
20 000 - 30 000 PLN net (B2B) | 18 000 - 24 000 PLN gross (UoP)
Requirements
- Cisco
- VMware
We are looking for an experienced Data Center Engineer who, thanks to strong expertise in Cisco ACI (Application Centric Infrastructure), will become a key pillar of our data center operations. Our Client is a fast-growing network technology leader focused on modern solutions and long-term, partner-based relationships. You will join a team of experts delivering advanced projects in designing and implementing modern network infrastructures for demanding business clients. 📍 WORK MODE: Remote work (with a visit to the Warsaw office once every 1–3 months). ⏱️ AVAILABILITY: Flexible – we will adapt to your notice period. 🧑💻 YOUR RESPONSIBILITIES • Designing, implementing, and maintaining advanced Data Center infrastructures based on modern architectures. • Conducting technical discussions with clients, gathering business requirements, and proposing optimal technical solutions. • Supporting the sales team during the pre-sales phase by preparing estimates, presentations, and comprehensive project documentation. • Configuring and administering SAN (Storage Area Network) environments using solutions from leading vendors. • Implementing internal network security policies, including ACLs (Access Control Lists) and microsegmentation processes. 🔍 WHAT WE EXPECT FROM YOU Must-have: • Minimum 4 years of experience working with Data Center infrastructure and strong expertise in Cisco ACI. • Solid knowledge of Spine-Leaf architecture, Fabric, EVPN (Ethernet VPN), and protocols such as BGP, OSPF, and VXLAN. • Hands-on experience with network and storage virtualization, especially VMware NSX and vSAN. • Strong analytical skills and readiness for occasional business trips for customer implementations. • Technical university degree (Computer Science, Telecommunications, or related field). Nice-to-have: • Experience in infrastructure automation using Ansible, Terraform, or Python. • Industry certifications from vendors such as Cisco, Brocade, Dell/EMC, or HPE. • Experience in delivering technical workshops for external clients. 🤝 WHY JOIN US? • Stable employment with an attractive salary aligned with your experience. • Remote-first work model with high autonomy and real influence on designed solutions. • A short and efficient recruitment process: one technical interview and a meeting with the Technical Director. • Opportunity to work with cutting-edge Data Center technologies across diverse client environments. • A partner-oriented culture based on open communication and strong team support. Thank you for all applications! We will contact selected candidates.
AVP, Wholesale Credit Risk
HSBC Technology Poland
⚲ Kraków
16 600 - 24 800 PLN (PERMANENT)
Requirements
- SAS
- R
- SQL
- VBA
- Degree (nice to have)
- IFRS (nice to have)
- Testing (nice to have)
- Python (nice to have)
About the project: Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. Your career opportunity This role will be responsible for credit risk model development within the Wholesale Credit Risk GRA team. The focus will be mainly on supporting AIRB methodology development with possible projects in stress-testing (ST) and IFRS 9 areas. Apart from model development as the dominant area of work, your involvement in all other stages of the risk model lifecycle might be required. Requirements: What you need to have to succeed in this role - University degree in a quantitative or technical field. - Good understanding of statistics and familiarity with sophisticated tools for numerical analysis. - Minimum 4 years of credit risk modelling experience. - Good exposure to credit model methodologies and data requirements for A-IRB (preferable), IFRS 9 or stress testing. - Proven ability to take ownership of your work and solve complex modelling related issues. - Strong database and credit risk systems experience including coding in Python (preferable), SAS, R, SQL, VBA. - Good understanding and interpretation of regulatory rules. Day-to-day tasks: - Work as part of model development teams developing new wholesale credit risk scorecards (PD, LGD, EAD) for global and regional portfolios. Support deployment and maintenance of live models. - Support or drive enhancement of existing credit risk scorecards in order to improve their performance or their applicability to other risk measurement or regulatory requirements. - Be involved in data improvement initiatives to support model development. - Participate in model usage forums, governance committees and technical panels. - Liaise with internal Independent Model Review team during model development, model monitoring and review processes. - Serve as people manager for small working groups if the team expands.
Senior ETL Developer (SAS)
Ness Solution
⚲ Warszawa
140 - 160 PLN/h net (B2B)
Requirements
- ETL
- SAS
- Oracle
We are looking for a Senior ETL Developer (SAS) to work on a project for our client in the insurance industry. Location: Warsaw. Work mode: hybrid - 1 day per week in the office. Scope of work: • Designing/modeling comprehensive Data Warehouse and Business Intelligence solutions in SAS and Oracle technologies, • Designing and developing ETL processes in the Data Warehouse, • Preparing as-built software documentation for Data Warehouse developers, • Monitoring software quality by performing code reviews, • Code optimization, • Ensuring high software quality by overseeing developer tests and as-built documentation, • Supporting the testing of the delivered software, • Building technical data marts and BI reports (including in SAS Viya). Requirements: • Minimum 5 years of experience working with a Data Warehouse, • University degree in computer science or mathematics, • Experience in designing and building DWH/BI systems, • Knowledge of the SAS DI, EG, and VA tools and of the SAS 4GL, SAS Macro Language, SQL, and Oracle programming languages, • Ability to propose effective solutions to requirements raised by users, • Analytical skills and the ability to work in a Scrum team.
Data Warehouse / BI Developer
speedapp
⚲ Warszawa
140 - 180 PLN/h net (B2B)
Requirements
- DWH
- ETL
- SAS
- Oracle
Requirements: • Minimum 5 years of experience working with a Data Warehouse, • University degree in computer science or mathematics, • Experience in designing and building DWH/BI systems, • Knowledge of the SAS DI, EG, and VA tools and of the SAS 4GL, SAS Macro Language, SQL, and Oracle programming languages, • Ability to propose effective solutions to requirements raised by users, • Analytical skills and the ability to work in a Scrum team, • English sufficient for reading technical documentation. Tasks: • Designing/modeling comprehensive Data Warehouse and Business Intelligence solutions in SAS and Oracle technologies, • Designing and developing ETL processes in the Data Warehouse, • Preparing as-built software documentation for Data Warehouse developers, • Monitoring software quality by performing code reviews, • Code optimization, • Ensuring high software quality by overseeing developer tests and as-built documentation, • Supporting the testing of the delivered software, • Building technical data marts and BI reports (including in SAS Viya).
GCP Data Architect
Lingaro
⚲ Warszawa
Requirements
- ETL
- GCP
- SQL
- Python
Growth through diversity, equity, and inclusion. As an ethical business, we do what is right — including ensuring equal opportunities and fostering a safe, respectful workplace for each of us. We believe diversity fuels both personal and business growth. We're committed to building an inclusive community where all our people thrive regardless of their backgrounds, identities, or other personal characteristics. Tasks: • Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions. • Design and oversee the overall data architecture and infrastructure, ensuring scalability, performance, security, maintainability, and adherence to industry best practices. • Define data models and data schemas to meet business needs, considering factors such as data volume, velocity, variety, and veracity. • Select and integrate appropriate data technologies and tools, such as databases, data lakes, data warehouses, and big data frameworks, to support data processing and analysis. • Ensure that data engineering solutions align with the organization's long-term data strategy and goals. • Evaluate and recommend data governance strategies and practices, including data privacy, security, and compliance measures. • Collaborate with data scientists, analysts, and other stakeholders to define data requirements and enable effective data analysis and reporting. • Provide technical guidance and expertise to data engineering teams, promoting best practices, and ensuring high-quality deliverables. Support to team throughout the implementation process, answering questions and addressing issues as they arise. • Oversee the implementation of the solution, ensuring that it is implemented according to the design documents and technical specifications. • Stay updated with emerging trends and technologies in data engineering, recommending, and implementing innovative solutions as appropriate. • Conduct performance analysis and optimization of data engineering systems, identifying and resolving bottlenecks and inefficiencies. • Ensure data quality and integrity throughout the data engineering processes, implementing appropriate validation and monitoring mechanisms. • Collaborate with cross-functional teams to integrate data engineering solutions with other systems and applications. • Participate in project planning and estimation, providing technical insights and recommendations. • Document data architecture, infrastructure, and design decisions, ensuring clear and up-to-date documentation for implementation, reference, and knowledge sharing. What We're Looking For: • At least 6 years of experience as Data Architect, including min. 4 years of experience working with GCP cloud-based infrastructure & systems. • Deep knowledge of Google Cloud Platform and cloud computing services. • Strong experience in the Data & Analytics area. • Strong understanding of data engineering concepts, including data modeling, ETL processes, data pipelines, and data governance. • Expertise in designing and implementing scalable and efficient data processing frameworks. • In-depth knowledge of various data technologies and tools, such as columnar databases, relational databases, NoSQL databases, data lakes, data warehouses, and big data frameworks. • Knowledge of modern data transformation tools (such as DBT, Dataform). • Knowledge of at least one orchestration and scheduling tool. • Programming skills (SQL, Python, other scripting). • Tools knowledge: Git, Jira, Confluence, etc. 
• Experience in selecting and integrating appropriate technologies to meet business requirements and long-term data strategy. • Ability to work closely with stakeholders to understand business needs and translate them into data engineering solutions. • Strong analytical and problem-solving skills, with the ability to identify and address complex data engineering challenges. • Knowledge of data governance principles and best practices, including data privacy and security regulations. • Excellent communication and collaboration skills, with the ability to effectively communicate technical concepts to non-technical stakeholders. • Experience in leading and mentoring data engineering teams, providing guidance and technical expertise. • Familiarity with agile methodologies and experience in working in agile development environments. • Continuous learning mindset, staying updated with the latest advancements and trends in data engineering and related technologies. • Strong project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines. • Strong understanding of distributed computing principles, including parallel processing, data partitioning, and fault-tolerance. What Will Set You Apart: • Certifications in big data technologies or/and cloud platforms. • Experience with BI solutions (e.g. Looker, Power BI, Tableau). • Experience with ETL tools: e.g. Talend, Alteryx • Experience with Apache Spark, especially in GCP environment. • Experience with Databricks. • Experience with Azure cloud-based infrastructure & systems. Missing one or two of these qualifications? We still want to hear from you! If you bring a positive mindset, we'll provide an environment where you feel valued and empowered to learn and grow. We offer: • Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites. • “Office as an option” model. You can choose to work remotely or in the office. • Workation. Enjoy working from inspiring locations in line with our workation policy. • Great Place to Work® certified employer. • Flexibility regarding working hours and your preferred form of contract. • Comprehensive online onboarding program with a “Buddy” from day 1. • Cooperation with top-tier engineers and experts. • Unlimited access to the Udemy learning platform from day 1. • Certificate training programs. Lingarians earn 500+ technology certificates yearly. • Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly. • Grow as we grow as a company. 76% of our managers are internal promotions. • A diverse, inclusive, and values-driven community. • Autonomy to choose the way you work. We trust your ideas. • Create our community together. Refer your friends to receive bonuses. • Activities to support your well-being and health. • Plenty of opportunities to donate to charities and support the environment. • Modern office equipment. Purchased for you or available to borrow, depending on your location.
Data Platform Engineer
RITS Professional Services
⚲ Warszawa
23 750 - 38 600 PLN net (B2B)
Requirements
- AWS
- Apache Kafka
- SQL
- Python
🏢 About the Company: We collaborate exclusively with a stable US-based client, a global leader in electronic trading platforms that has operated for over 25 years. The company serves the world’s leading asset managers, central banks, hedge funds, and other institutional investors — facilitating around 30 trillion USD in trades every month across its electronic marketplaces. 📌 About the Role: You will be building cutting-edge data platforms that ingest, manage, and process data from all of the company’s businesses. The platform will accommodate a wide range of use cases, from simple customer-facing data APIs to large-scale machine learning models. We offer: • Hardware & setup budget (e.g., standing desk, laptop, monitors, coworking space) • Optional integration trips (New York / London / Warsaw) — 3–4 days, covered by the company Responsibilities: • Build and run the data platform using technologies such as public cloud infrastructure (AWS), Kafka, databases, and containers • Develop the data platform based on open-source software and cloud services • Build and run ETL pipelines to onboard data into the platform, define schemas, build DAG processing pipelines, and monitor data quality • Help develop the machine learning development framework and pipelines • Manage and run mission-critical production services We are looking for: • Senior-level Python • Very good knowledge of AWS (or GCP), daily work in Prefect (you don’t need to know it; if you know Airflow or similar, that’s enough), Kafka, SQL, databases, and GitLab; Kubernetes (k8s) and Linux are used in the production environment. Terraform - nice to have • English min. C1
Senior Data Analyst (Incogni)
Surfshark
⚲ Warszawa
19 600 - 25 000 PLN net (B2B) | 19 600 - 25 000 PLN gross (UoP)
Requirements
- SQL
Incogni is at the forefront of consumer privacy-tech, making it easy for people to remove their personal information from data brokers. Since launching, we’ve quickly become the most popular data removal service among consumers, and our international team continues to grow across LT, PL, DE, and the US. As we scale, data is at the heart of our decision-making and innovation. Our Data team empowers colleagues across the company to monitor progress, plan effectively, and achieve ambitious goals by building robust ETLs, clear dashboards, and meaningful analytics. If you’re passionate about turning data into actionable insights and want to help drive our mission for stronger privacy, we’d love to meet you! If you want to: • Build robust, end-to-end analytics solutions that empower stakeholders to make smarter, faster decisions; • Design clean, scalable data models and reporting layers to ensure reliable reporting; • Push the boundaries of how data is used - challenge assumptions, uncover opportunities, and drive continuous improvement; • Translate complex data into clear, actionable visuals and narratives; • Ensure that data products (dashboards, models) are consistent with current principles, guidelines, and standards; • Drill down into the data to find specific insights and recommendations for management and colleagues; • Advocate for data literacy and enablement, helping others to self-serve and interpret data confidently. And you can check off: • Proven experience in an analytical role; • Advanced proficiency in SQL, with the ability to write performant and complex queries; • Expertise in creating compelling dashboards and visualizations using tools such as Tableau, Looker, Power BI, or similar; • Strong business acumen: ability to challenge vague requests, clarify impact, and guide stakeholders toward meaningful, data-driven decisions; • Excellent problem-solving and communication skills, with the ability to manage end-to-end projects; • Passion for data analysis, visualization, and technology; • Solid working knowledge of dbt (Data Build Tool) and/or Python is considered an advantage. Here’s the deal: • Growth and learning opportunities: time dedicated to learning, conferences, online learning platforms, and books for your professional development; • Health and wellness: we want you to feel and be your best. That's why we offer various benefits, from online workouts with a physical coach and sports card to health insurance and regular mental health checks; • Tools of your choice: choose the technical equipment and tools you need to do your best work; • Community and celebrations: get ready for long-lasting traditions such as yearly workation, Friday get-togethers, various team buildings and company celebrations; • Convenient commuting: traveling from point A to point B in a city like Warsaw is quite a journey! That’s why we compensate your public transport costs; • Work-life balance: We encourage working in the office, with the possibility of up to two remote days per week. Plus, with our WFA policy, you can work from almost anywhere in the world; • Premium Surfshark accounts: for you, your family, and friends!
Senior/Lead Data Infrastructure Engineer
N-iX
⚲ Kraków
7 000 - 8 900 USD net (B2B)
Requirements
- AWS
- Cortex
- AI
- Snowflake
- Python
We are seeking a highly senior, hands-on AI Data Infrastructure Engineer (potentially at a Lead level) to architect and own our institutional AI foundation. This is a specialized role at the intersection of Data Engineering and Software Engineering. Unlike a traditional AI Developer, your focus will be on the infrastructure, tooling, and ecosystem that powers AI, rather than building individual end-user solutions. You will modernize our data environment, making it "AI-ready," and ensure our platform is robust, scalable, and cost-optimized to support the next generation of online education and healthcare simulations. About the Client Our customer is a leader in online education, dedicated to empowering professionals through innovative simulation and learning platforms. We are a certified great workplace, ranked consistently by Fortune as a top employer for Millennials and Women. You will join the Data, AI & Automation (DAIA) team—a tight-knit, remote-first group of passionate experts driven by curiosity. We work in a fast-paced environment where we value "human-centric" AI that solves real-world problems in the healthcare and allied health fields. Responsibilities • Platform Ownership: Set up, maintain, and own the core AI platform infrastructure with a primary focus on Snowflake Cortex and its surrounding ecosystem. • Infrastructure as Code & Tooling: Configure and maintain MCP (Model Context Protocol) servers and manage the integration of open-source packages (e.g., Goose). • Cost & Performance Optimization: Actively manage Snowflake credits, token usage, and overall system performance to ensure a cost-effective and resilient environment. • Data Architecture: Modernize and refine high-level platform architecture, ensuring external datasets are seamlessly integrated and "AI-ready." • Observability: Implement and maintain high standards for system monitoring, observability, and reliability. • Technical Leadership: Act as a self-starting, independent lead who can translate high-level infrastructure needs into functional, production-grade code. Must-Have: • Expert-level Snowflake: Extensive hands-on experience with Cortex, including setup, management, and cost optimization. • Snowflake Suite: Deep expertise (SME) in Snowpark and Streamlit. • Programming: Advanced proficiency in Python and a strong background in Software Engineering. • AI Infrastructure: Proven experience in MCP (Model Context Protocol) server development and configuration. • Cloud & Data: • Deep understanding of data modeling, data architecture, and AWS environments (specifically AWS Bedrock). • Proficiency in core AWS infrastructure: S3 (data lakes), IAM (permissions/security), Lambda (serverless compute), and VPC/Networking (secure cloud connectivity). • Seniority: Minimum 5+ years of experience in data/infrastructure engineering, showing the ability to work independently and interface directly with internal technical stakeholders. Nice-to-Have: • GenAI/RAG: Practical experience deploying Generative AI and Retrieval-Augmented Generation (RAG) systems in a production setting. • Machine Learning: A background in ML engineering or MLOps (e.g., experience with AWS SageMaker). Open Source: Experience contributing to or managing open-source AI tooling like Goose.
Data Software Engineering Program
EPAM Systems
⚲ Poland (Remote)
Requirements
- DWH
- Docker
- RDBMS
- Linux
- Kubernetes
- Java
- SQL
- Python
Are you seeking to develop industry-relevant skills and advance your career in IT? Apply for our program to gain practical knowledge and mentorship from EPAM's leading experts. Details If you have one year of hands-on experience with Python or Java and are striving to step into the world of Data Software Engineering (Big Data), this program is for you. Our learning path consists of two stages, offering you a step-by-step approach to gaining in-depth knowledge. Apply today to have the opportunity to: • Gain expertise in modern software development technologies • Dive into the capabilities of the leading public cloud platforms • Explore the design, development and maintenance of big data software systems • Learn large-scale data handling tools (Apache Spark, Kafka, Databricks, etc.) After successfully completing all program stages, you will gain market-oriented skills that you may further apply at EPAM or elsewhere in the IT industry. What do we offer? • Industry-focused education with EPAM, a leading software engineering company • Intensive, hands-on learning through numerous practical assignments • Free access to top-notch educational materials and resources • Regular guidance from seasoned mentors throughout your advanced training • Deep dive into your specialization and emerging technologies like GenAI and Cloud • Opportunity to join the EPAM team upon successful completion of all stages Training process The program is divided into two progressive stages, allowing you to gradually advance to more profound topics: • Stage 1: Fundamentals (~3 months, ~10-12 hrs/week) You will explore self-study materials at your convenience and complete assigned tasks with immediate automated feedback. To move forward, you will need to show good results and successfully pass four parts of the final test and technical interview. We will assess your knowledge of Git, Python, DevOps, CI/CD, DB, SQL, DWH and ETL. • Stage 2: Specialization (~3 months, ~20 hrs/week) We will offer you a mentoring program with one-on-one meetings with production experts. Your learning will become more intensive and practice-driven, with a focus on advanced concepts and tools. After completing the training, the best candidates will have the opportunity to interview for a Junior position and join the EPAM team in Poland, based on demonstrated skills and available opportunities in the company. What is required for training: General requirements: • Citizens of Poland and permanent residents who are eligible to work in this country • Second-to-last or final year university students and graduates • Individuals aged 18 years and older Skills requirements: • English level from B2 (Upper-Intermediate) and higher • Hands-on experience with Python or Java languages with a good theoretical background • Solid knowledge of Relational Database Management System (RDBMS) theory • Proficiency in Structured Query Language (SQL), including DML, DDL, DCL and TCL • Strong theoretical foundation in Data Warehousing (DWH) and Data Lakes • Good grasp of Linux, as well as containerization and orchestration technologies such as Docker and Kubernetes Nice to have: • Degree from a technical university or other educational institution with a technical specialization Please read this info before registration: • The program is available to Polish citizens and permanent residents who are 18 years of age or older, eligible to work full-time in Poland at the time of application. • We have designed this course for beginners in the field of Data Software Engineering. 
If you have relevant production experience, feel free to explore our career website for job opportunities. • This program offers continuous enrollment, allowing you to apply, pass the assessment and begin learning at any time, as long as registration remains available. • The recommended time to complete the training modules of the first stage is ~3 months. If we do not see progress during this period, your application will be automatically cancelled. • We strive to maintain a fair registration and testing process for everyone. If we detect any cheating, we will have to reject your application. • If you are interested in applying while enrolled in another EPAM Campus program, please discuss it with your training coordinator first.
Senior Data Engineer
Remodevs
⚲ Warszawa
5 500 - 7 700 USD net (B2B) | 5 500 - 7 700 USD gross (UoP)
Requirements
- ETL
- SQL
- Python
Please note - this is a hybrid role, with 3 days per week working from our office in Warsaw. Overview: We are a company that works with AI-powered enterprise operations. We provide digital products and consulting services that help businesses grow and change. We use modern technology to simplify processes, improve efficiency, and create new revenue opportunities, especially in private capital markets. Our ecosystem includes three main parts: • PaaS (Platform as a Service): an AI-based core platform that improves workflows, provides insights, and supports value creation across portfolios. • SaaS (Software as a Service): a cloud platform that delivers strong performance, smart features, and reliable execution at scale. • Solutions and Consulting: flexible technology solutions that help companies manage, grow, and improve performance. With more than 10 years of experience working with fast-growing companies and private-equity-backed businesses, we have strong industry knowledge and know how to turn technology into a business advantage. The Opportunity We are looking for a Cloud Data Engineer with experience in Python, DBT, SQL, and data warehousing. You will work on enterprise-level data systems and help prepare data for analytics and AI use. Responsibilities and Duties • Design, build, and run Python scripts and ETL processes using tools like Prefect and Airflow • Prepare data for analysis by transforming and organizing it • Design database schemas that support reporting, analytics, and AI use • Build cloud-based data solutions and architectures • Transform and migrate data using Python, DBT, and Pandas • Work with streaming and event-based technologies for real-time data processing • Load and transform structured, semi-structured, and unstructured data • Improve ETL performance to handle large data volumes • Monitor ETL jobs and fix performance or reliability issues • Follow best practices for data quality, security, and governance • Write complex SQL queries and work with existing data pipelines • Review code to ensure quality and follow standards • Support CI/CD and automated release processes • Validate, clean, and check data for errors Requirements • 3+ years of experience with Python, including Pandas • 5+ years of experience writing complex SQL queries • 5+ years of experience building ETL pipelines using Airflow, Prefect, or similar tools • Experience with cloud data warehouses such as RDS, Redshift, or Snowflake • Knowledge of data warehouse design, including OLTP, OLAP, dimensions, and facts • Experience with cloud-based data systems, messaging, and analytics • Bachelor’s degree in Computer Science or a similar field is preferred Nice to Have • Experience with Docker and Kubernetes • Experience with CI/CD automation • Experience with AWS Lambda or Step Functions • Knowledge of data partitioning • Experience with Databricks or PySpark • Cloud certifications Why Join Us? We value people who solve problems creatively, learn quickly, and enjoy working in an open and diverse team. We work hard, but we also believe in having fun and growing together.
RPA & AI Integration Specialist (m/f)
Antal Sp. z o.o.
⚲ Wrocław
Requirements
- UiPath
- AI
- RPA
- JavaScript
- Python
Locations: all of Poland (hybrid). Employment type: employment contract | full-time. Experience: min. 3 years. For our Client, we are looking for an experienced specialist to join a business process automation team working with the latest RPA, AI, and GenAI technologies. Responsibilities • Building and implementing automation solutions using UiPath (including advanced modules: Document Understanding, Communication Mining, Task Mining). • Integrating automations with a variety of business systems. • Using AI and GenAI to extend automation capabilities. • Working directly with clients - requirements analysis, advisory, solution implementation. • Continuously following and adopting new technology trends. Requirements • Min. 3 years of commercial experience with UiPath (Document Understanding, Task Mining, etc. are a plus). • Practical knowledge of AI- and Generative AI-based solutions. • Programming skills in Python and/or JavaScript. • Experience integrating with external systems (e.g., ERP, CRM, API). • Strong communication skills and experience working with clients. • Openness to growth and new technologies. What we offer • Employment directly with Antal’s client. • Focus on long-term cooperation - long-term projects. • Employment based on an employment contract. • Attractive salary + annual bonuses. • Private medical care (Medicover) + additional packages. • Life insurance. • Benefits platform. • Courses, training, and educational materials. • Option of hybrid work. • Work in an international environment on modern projects. Additional information: You will work within the structures of a global technology company supporting the digital transformations of the world’s largest organizations. An environment that fosters growth, stable employment, and real influence over project selection and career direction. Interested? Apply via the form or contact us directly. What do you gain by applying to an Antal offer? If your application is reviewed positively (you are invited to the process), you will receive support from a Consultant who will stay in regular contact with you (by email or phone), help you prepare for the interview with your future employer, and look after the quality of the recruitment process you are taking part in. Who are we? We are a leader in the recruitment of specialists and managers as well as HR consulting. The brand is present in 35 countries and has operated in Poland since 1996. During this time, we have built many candidates’ careers thanks to a flexible and comprehensive approach to every recruitment. Antal is made up of over 130 professional recruitment consultants who are not only effective recruiters but also qualified advisors specializing in both their chosen sector and role.
Data Engineer
DCG
⚲ Warszawa
Requirements
- Unity Catalog
- Spark
- Databricks
- Python
As a recruitment company, DCG understands that every business is powered by experienced professionals. Our management style and partnership approach enable us to meet your needs and provide continuous support. Due to our ongoing growth and the large number of recruitment projects we undertake for our partners, we are currently looking for: Data Engineer Responsibilities: • Develop and build an application to manage customer datasets, including product descriptions and sales figures • Design and implement a data lakehouse platform in Databricks to process data from the application, transforming raw customer datasets into standardized formats for mandatory reporting purposes • Collaborate within a small, agile 3-person data team responsible for developing the data platform. Key responsibilities include: • Coding in Spark and Python • Designing and creating data flows • Gathering and understanding business requirements • Conducting internal testing and ensuring data quality Requirements: • 8+ years of experience in data engineering, demonstrating deep technical expertise • Proactive and self-motivated, with a results-driven approach • At least 2 years of hands-on experience working with Databricks • Strong interpersonal skills, with the ability to communicate effectively and build relationships quickly • Proficiency in Spark, Python, PySpark, Databricks, Unity Catalog, and Delta Live Tables • Fluency in English is essential, as the role involves collaboration with UK-based teams Nice to have: • Polish language skills Offer: • Private medical care • Co-financing for the sports card • Training & learning opportunities • Constant support of dedicated consultant • Employee referral program
Database Administrator
DOZ S.A.
⚲ Olsztyn
Requirements
- ETL
- Azure
- MariaDB
- MS SQL
- SQL
- Python
Your responsibilities: • Building, developing, and maintaining the data warehouse • Ensuring data consistency • Extracting data from databases using ETL tools Expectations: • Knowledge of SQL at an intermediate level or higher • Intermediate knowledge of the MariaDB and MS SQL database engines and of DM (Dimensional Modeling) techniques • Ability to build ETL processes in Azure Data Factory or Pentaho • Knowledge of the basics of Python programming • Basic knowledge of Azure • Basic knowledge of Databricks
Head of Data & Quant Engineering
RedStone
⚲ Warszawa, Kraków, Gdańsk, Poznań, Wrocław
20 000 - 35 000 PLN net (B2B)
Requirements
- AWS
- Go
- Rust
- Python
🚀 Step into RedStone – the fastest-growing blockchain startup as a Head of Data! 👥 About us RedStone is a fast-growing Polish blockchain startup revolutionizing oracle infrastructure. With a team of 40, over half being senior engineers and technical experts, we deliver scalable, secure, and low-latency off-chain data to smart contracts across multiple chains. We’ve secured over $6B in TVS, raised $15M in Series A funding, and were recognized by Forbes as the top VC-backed startup in Poland. Backed by leading Web3 names like Arrington Capital, Stani Kulechov (Aave), and Gnosis, our team includes alumni from Google, OpenZeppelin, and major crypto projects. We work remotely across time zones, with a Warsaw HQ for deep work and collaboration. Watch a short video to meet our team. 🧠 About the role As Head of Data, you'll own the strategy and execution of RedStone’s data architecture. You’ll lead a high-performing technical team, define standards for data quality and reliability, and help deliver real-time intelligence for smart contracts across multiple chains and financial ecosystems. This is a high-impact leadership role at the intersection of DeFi, infrastructure, and big data. 💼 Responsibilities • Lead the Data Engineering and Analytics function across the company • Architect robust systems for collecting, validating, and aggregating off-chain and on-chain data • Define and enforce data reliability, accuracy, and uptime standards • Design monitoring systems to detect anomalies, manipulation attempts, and cross-source inconsistencies • Work cross-functionally with Engineering, Product, and Business teams to turn raw data into mission-critical insights • Grow and mentor a team of high-caliber data and backend engineers • Research and prototype advanced data products (e.g., asset pricing models, latency-adjusted feeds, cross-DEX aggregation) • Own the roadmap and delivery of the data infrastructure used by our oracle network ✅ You’ll bring • Strong backend engineering background with Python, Go, or Rust • Experience working with AWS (especially Lambda), event-driven architectures, and message queues like RabbitMQ • Deep knowledge of time-series databases (InfluxDB, TimescaleDB) and monitoring systems (Grafana, CloudWatch) • Familiarity with on-chain data structures, smart contract logs, block timing, and decentralized data fetching • Advanced understanding of financial or trading data, including anomaly detection, latency compensation, and pricing validation • Quality-Driven Mindset – extreme attention to detail; cares deeply about data integrity in high-stakes environments • Leadership & Mentoring – experience managing or mentoring technical teams, building culture, and scaling impact • Ownership & Execution – strong project management and execution skills, especially in unstructured and fast-moving environments 🌟 Bonus points • Experience with DEX protocols (Uniswap, GMX, dYdX, Curve, etc.)
or DeFi lending platforms (Aave, Compound, Euler) • Knowledge of CEX infrastructure, order books, matching engines, and arbitrage mechanics • Understanding of liquidations, slippage, and pricing risk in lending/AMM environments • Contributions to Web3 projects, DAOs, or open-source blockchain tooling • Exposure to quantitative finance or financial modeling • Hands-on experience with smart contracts, oracles, and price feed architecture 🤝 What we offer • Fully remote or hybrid (Warsaw HQ at Zgoda 3) • Competitive salary + token allocation (after 3 months) • Private healthcare & Multisport card • Flexible hours & pre-agreed paid service breaks • Top-tier equipment (MacBook, external displays) • Regular team off-sites, hackathons, and Web3 conferences • Full ownership over your work • A rare opportunity to build foundational infrastructure for the future of finance 🔗 Useful Links • 🌐 Website • 🧠 Docs • 💻 GitHub • 🐦 X Be part of a world-class team building mission-critical Web3 infrastructure. Apply now and redefine the future of decentralized data.
Solution Architect (Data / AI / Azure)
TQLO SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Warszawa
200 - 250 PLN/h net (B2B)
Requirements
- GenAI
- CI/CD
- AI
- Azure
Our Client is a mature technology organization delivering data-driven projects for international clients across various industries. You will join a team of architects supporting key initiatives in Data, Analytics, and AI, with a strong focus on Azure cloud–based solutions. We are looking for an experienced Solution Architect who combines a strong data background, a hands-on architectural approach, and confidence in client-facing discussions — including presales activities. 📍 WORK MODE: 100% remote 🧑💻 YOUR RESPONSIBILITIES • Leading end-to-end delivery of Data Integration, Analytics, and AI solutions in Azure — from concept to production • Designing complete solution architectures, making key technology decisions, and taking responsibility for their long-term impact • Supporting development teams throughout the full application lifecycle: POC, MVP, and full PROD deployments • Defining standards, frameworks, and strategies for the development and scaling of data/AI solutions • Working directly with clients during project delivery and presales activities • Actively participating in innovation initiatives, tech communities, and the development of AI competencies within the organization 🔍 WHAT WE EXPECT FROM YOU • Minimum 10 years of experience in IT, including at least 4 years as a Solution Architect • Very good knowledge of Azure cloud (data, analytics, AI/ML services) — a key requirement for this role • Strong background in Data Engineering, Data Integration, and Analytics • Experience designing solutions using Machine Learning and GenAI (Generative AI) • Practical knowledge of Business Intelligence tools, e.g. Power BI • Experience working directly with clients, including presales and situations requiring strong business maturity • Very good command of English (C1) — international environment Nice to have: • Knowledge of CI/CD, Terraform / IaC (Infrastructure as Code) • Technology certifications (especially Azure) • Experience in project or team management 🤝 WHY JOIN? • 100% remote work and real autonomy in how you operate • Participation in strategic Data & AI projects, one of the organization’s key growth areas • Stability and scale — on the market since 2008, 1500+ experts across 7 global locations • Comprehensive online onboarding with a dedicated Buddy from day one • Broad development opportunities: Udemy, certification programs, tech communities, internal AI initiatives • A culture of partnership, trust, and real influence on the technological direction of projects Employment Agency Registration Number (KRAZ): 33580
Database Administrator
Calimala.ai
⚲ Warszawa, Kraków, Wrocław, Poznań, Gdańsk
16 000 - 26 000 PLN net (B2B)
Requirements
- Installation, Configuration & Upgrades
- Incident Management & Operational Practices
- Security & Compliance (IAM, encryption, auditing)
- Backup, Recovery & High-Availability / DR
- Monitoring & automation tools
- Relational Databases
- Collaboration with Architects, Dev & Data Teams
- Performance Tuning & Query Optimization
- Cloud Database Services
- Schema Design & Index/Partition Strategy
Sitting close to the most critical systems, this role is about keeping data available, secure, and fast—no matter how demanding the workload. With a mix of hands-on database expertise and platform thinking, you’ll help clients run resilient environments that can support analytics, applications, and AI at scale. Database Administrator Calimala partners with enterprises across the Gulf and Europe to design, build, and scale Data & AI teams. As a Database Administrator, you’ll join a network of specialists who understand that uptime, performance, and integrity are non-negotiable—especially in regulated, data-intensive environments. This role sits at the core of our clients’ technology stack. You’ll manage and optimize database platforms across cloud and on-prem environments, support migrations and upgrades, and put in place the operational practices that keep systems stable as demands grow. What you'll be doing As a Database Administrator at Calimala, you’ll lead and support projects where database reliability is business-critical. One engagement may involve planning and executing a major upgrade or migration; another might focus on performance tuning, capacity planning, and hardening security in production environments. “We treat databases as living systems: they need to be observed, understood, and cared for—not just installed and forgotten.” You’ll work closely with architects, data engineers, and application teams to design schemas, optimize queries, and align environments with best practices. You’ll define standards for backups, monitoring, and incident response, ensuring that when issues arise, they’re detected early and resolved with minimal impact. Who we're looking for You’re an experienced DBA who enjoys both deep technical work and partnering with teams that depend on the systems you run. You’re comfortable owning production environments, making trade-offs explicit, and documenting your decisions so others can build on them. You’ve likely worked in enterprise or consulting settings supporting multiple applications and stakeholders. At Calimala, we value depth, accountability, and partnership—you take pride in stable systems, clear communication, and leaving environments in better shape than you found them. • Strong hands-on experience administering relational databases (e.g. SQL Server, Oracle, PostgreSQL, MySQL or similar) • Proven track record in installation, configuration, patching, and upgrading database instances • Solid skills in performance tuning, query optimization, and index/partition strategy • Experience designing and operating backup, recovery, and high-availability/DR solutions • Familiarity with security best practices: access control, encryption, auditing, and compliance requirements • Experience with monitoring and automation tools for database operations • Exposure to cloud database services (e.g. Azure SQL, AWS RDS/Redshift, GCP Cloud SQL/BigQuery) is a strong plus We’re looking for practitioners who see reliability as a craft: people who combine discipline with curiosity, and who understand that the best database work often shows up as the problems that never happen.
Data Architect (Azure)
TQLO SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ
⚲ Warszawa
180 - 220 PLN/h netto (B2B)
Wymagania
- ETL
- Azure
- Data modeling
- Databricks
- Azure Services
- Apache Spark
Our Client is an international technology organization delivering advanced Data & Analytics projects for global partners. The project focuses on designing and developing a modern cloud-based data architecture, with a strong emphasis on scalability, data quality, and real support for business decision-making. We are looking for an experienced Data Architect who combines a data architecture mindset with a strong data engineering background and a willingness to work closely with both technical teams and business stakeholders. 📍 WORK MODE: 100% remote 🧑‍💻 YOUR RESPONSIBILITIES • Collaborating with business and technical stakeholders to translate business needs into scalable cloud-based data engineering solutions • Designing and overseeing the end-to-end data architecture — from data models and pipelines to processing and analytics layers • Defining standards for Data Modeling, Data Governance, data quality, security, and regulatory compliance • Selecting and integrating technologies such as Apache Spark, Databricks, Azure Services, and Microsoft Fabric Data Engineering • Supporting data engineering teams through mentoring, code/design reviews, and solving complex technical challenges • Optimizing performance of data processing systems and ensuring their stability and long-term development in line with the overall data strategy 🔍 WHAT WE EXPECT FROM YOU • Experience in a Data Architect role with a strong background in data engineering • Very good knowledge of Azure Cloud and the Azure data services ecosystem (mandatory requirement) • Hands-on experience with Apache Spark, Databricks, ETL (Extract, Transform, Load), data pipelines, and data solution architectures • Proficiency in Python, PySpark, SQL, and large-scale data processing principles • Experience in Data Modeling & Design, Data Management & Governance, and CI/CD automation in data projects Nice to have: • Experience with other cloud platforms (AWS, GCP) as a complement to Azure expertise • Exposure to GenAI for Data Engineering • Experience with Microsoft Fabric Data Engineering • Experience working in international and distributed teams 🤝 WHY JOIN? • A stable, long-term project delivered for global clients in the Data & Analytics domain • 100% remote work with flexible working hours • International environment with real influence on architecture and technology decisions • Online onboarding with a dedicated Buddy from day one • Access to training, certifications, and learning platforms (including Udemy) with real support for expert-level growth Thank you for all applications! We will contact selected candidates. Employment Agency Registration Number (KRAZ): 33580
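As a rough illustration of the ETL and data-pipeline work this role involves, the sketch below shows a minimal PySpark batch step: read raw data, clean and type it, and write a curated layer. The paths and column names are assumptions for the example; on Azure this kind of job would typically run on Databricks against ADLS or Delta tables rather than local files.

```python
# Minimal PySpark ETL sketch (hypothetical paths and column names): raw in,
# cleaned and typed data out. Intended only to illustrate the kind of pipeline
# step the listing describes, not a specific project implementation.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Assumed input location and schema.
raw = spark.read.option("header", True).csv("/tmp/raw/orders.csv")

curated = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
)

# Assumed output location for the curated layer.
curated.write.mode("overwrite").parquet("/tmp/curated/orders")
spark.stop()
```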
Architekt Danych / Data Architect
TSS
⚲ Warszawa
15 000 - 18 500 PLN netto (B2B)
Wymagania
- Apache Kafka
- ETL
- Power BI
- Data architecture
- ELT
- Agile
- SQL
- Big Data
- NoSQL
At TSS we build top-quality solutions in Software Development, FinTech, and AI. We create payment systems, online payment gateways, and solutions that enable innovative payment processing. Our teams also take part in projects delivered for clients across many industries and specializations. If you want to join a team of enthusiasts for whom work is a passion, an adventure, and an opportunity for professional growth, join the TSS team now! Requirements: • Professional experience: a minimum of 5 years of experience in data architecture design gained within the last 8 years, including participation in at least 2 data warehouse or Big Data projects covering the design of logical and physical data models and the implementation of ETL/ELT processes; experience as a data architect on projects delivered with Agile methodology; producing architecture documentation and data management standards. • Technical knowledge and skills: practical knowledge of tools and technologies such as Apache Kafka, Apache Spark, Snowflake, Azure Synapse, Google BigQuery. • Knowledge of relational and non-relational databases: SQL, NoSQL. • Knowledge of design patterns for systems processing large data volumes. • Knowledge of personal data protection principles, including anonymization and pseudonymization. • Experience with data visualization and analytics tools such as Power BI and Tableau. What we offer: • Fully remote work or work from the Warsaw office; • Stable B2B cooperation; • Co-financed private medical care with PZU; • Support for professional development, including internal cybersecurity training;
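One of the requirements above is familiarity with anonymization and pseudonymization of personal data. Below is a minimal, illustrative Python sketch of pseudonymization via a keyed hash; the key handling and field names are assumptions for the example, and a real implementation would depend on the client's key management and legal requirements.

```python
# Illustrative-only sketch of pseudonymisation: replace a direct identifier
# with a keyed hash so records stay joinable without exposing the raw value.
# Key management and legal assessment are out of scope here.
import hmac
import hashlib

SECRET_KEY = b"store-this-in-a-secrets-manager"  # assumption: key is managed externally

def pseudonymise(value: str) -> str:
    """Return a stable, non-reversible token for a personal identifier."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jan.kowalski@example.com", "order_total": 129.99}
record["email"] = pseudonymise(record["email"])
print(record)
```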
BI Analyst (F/M)
Centrum Medyczne Puławska
⚲ Piaseczno
Wymagania
- Databases
- BI
- SQL
Main tasks: • Working with business users to clarify analytical needs and interpret results. • Organizing and describing the business logic behind the reports and indicators you build (as working documentation). • Designing, developing, and maintaining BI dashboards and reports for internal stakeholders. • Building and maintaining data models and business logic (the analytical layer). • Working with data from multiple sources (integration, validation, consistency). • Preparing ad-hoc analyses and recurring reports that support operational and management decisions. • Documenting the solutions you create (business logic, data sources, queries). • Collaborating with the data engineering team and the business on developing BI solutions. • Participating in development projects for reporting and analytics systems. Requirements: • Clear, to-the-point communication with non-technical stakeholders. • Willingness to ask questions and refine requirements. • Openness to collaboration and to receiving feedback. • Experience with BI tools (e.g., Power BI, Qlik Sense, Tableau, Looker). • Practical knowledge of SQL (reading, writing, and modifying queries). • Ability to work with data models (relationships, measures, aggregations). • Good knowledge of MS Excel (data analysis, models). • Ability to organize your own work and take responsibility for your area. • Ability to communicate analytical conclusions and assumptions clearly. Nice to have: • Knowledge of DAX / Power Query. • Experience documenting analytical solutions. • Experience in an environment with many stakeholders. • Basics of predictive or planning modeling. We offer: • Real influence on the shape of analytical solutions in the organization. • A clearly defined scope of responsibility. • Stable terms of cooperation and well-ordered decision-making processes. • A private medical care package - we look after your health by providing access to professional medical care. • Access to a benefits platform (including a Multisport card) - everyone will find perks suited to their needs.
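To give a concrete flavour of the recurring reporting work described above, here is a minimal pandas sketch that aggregates visit data into a monthly KPI table. The column names and values are invented for the example; in practice the analytical layer would more likely live in SQL or in a BI tool's data model (e.g. Power BI measures).

```python
# Minimal pandas sketch (hypothetical column names and values) of a recurring
# report: aggregate operational data into a monthly KPI table that a Power BI
# or Excel report could consume.
import pandas as pd

visits = pd.DataFrame(
    {
        "visit_date": pd.to_datetime(["2024-01-03", "2024-01-15", "2024-02-02", "2024-02-20"]),
        "clinic": ["Piaseczno", "Piaseczno", "Piaseczno", "Warszawa"],
        "revenue": [250.0, 180.0, 300.0, 220.0],
    }
)

monthly_kpi = (
    visits.assign(month=visits["visit_date"].dt.to_period("M"))
    .groupby(["month", "clinic"], as_index=False)
    .agg(visit_count=("revenue", "size"), total_revenue=("revenue", "sum"))
)

print(monthly_kpi)
```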
Data Scientists with Generative AI (mid/senior)
Yosh.AI
⚲ Warszawa, Kraków, Wrocław, Poznań, Gdańsk
10 000 - 20 000 PLN netto (B2B)
Wymagania
- NLP
- Git
- PyTorch
- NumPy
- Pandas
- Python
- Generative AI
- Data Science
Join Yosh.AI, a Google Premier Partner and leading GenAI solution provider for Customer Experience in Europe, working with companies like Zalando, Orange, Metro, CCC, Medicover, eSky and many more across Poland and Europe. We are on the lookout for talented individuals to contribute to our mission of transforming customer experiences in the retail, banking, health and insurance sectors on a global scale. As Google Partner of the Year for 2023 and 2024, we collaborate on numerous cutting-edge and R&D projects, setting industry standards in AI applications. If you're ready to make a significant impact and work alongside passionate experts on our team and at Google, Yosh.AI is your destination. Apply now to be a part of our award-winning journey and help us drive the AI revolution. We are excited to announce an open position for: Data Scientists with Generative AI (mid/senior) - Location: Warsaw/remote. We are looking for a hands-on generative AI engineer to architect, build, and deploy the next generation of autonomous and agentic AI systems. In this role, you will bridge the gap between rapid prototypes and robust, production-ready solutions that solve complex business challenges for our enterprise clients. You will be at the forefront of developing innovative technologies on a global scale, leveraging the full power of Google's Generative AI stack, LangChain, and other advanced models to create solutions that think, reason, and act. This is a unique opportunity to own end-to-end projects and work with a team of passionate experts, in close collaboration with Google, to drive real-world impact. Key Responsibilities: • Architect and build enterprise-grade, agentic AI systems and conversational agents • Engineer and deploy scalable AI/ML solutions on Google Cloud Platform (GCP) • Develop and maintain serverless systems and containerized Python REST APIs • Leverage LLMs to perform deep analysis on structured and unstructured data, enhancing our big data analysis platforms • Drive innovation by rapidly prototyping new solutions and championing out-of-the-box thinking Required Experience: • Proven experience building and deploying AI/ML models in a production environment • Excellent programming skills in Python and experience building and deploying REST APIs • Hands-on experience with a major cloud platform (GCP, AWS, or Azure) • Practical experience with LLM orchestration frameworks • Deep knowledge of Generative AI concepts • Understanding of machine learning concepts • Proficiency with version control systems (Git) Highly Desirable: • Specific, in-depth experience with the Google Cloud Platform (GCP) AI/ML stack • Experience designing and building fully autonomous or agentic AI systems • Knowledge of technologies related to LLM models (e.g., LangChain, ADK, Vertex AI) • Practical experience with Conversational AI platforms (e.g., Google Dialogflow CX/Playbooks) • Experience with containerization technologies, specifically Docker Salary Range: 10.000-20.000 PLN gross per month (full-time contract equivalent, depending on experience) We offer: • Opportunity for professional development in the area of GenAI • Private medical insurance • Multisport card • Google certification paths • Hybrid or remote location - our office is located in the center of Warsaw • Cooperation with a great team of energetic and open-minded people
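As an illustration of the "containerized Python REST APIs" mentioned in the responsibilities above, here is a minimal FastAPI sketch. The endpoint name and the stubbed model call are assumptions for the example rather than Yosh.AI's actual stack; in production the stub would be replaced by an LLM or agent invocation, for example via an orchestration framework or Vertex AI.

```python
# Minimal FastAPI sketch (illustrative only, not Yosh.AI's actual stack) of a
# containerised Python REST API that could sit in front of an LLM-backed agent.
# The model call is a placeholder stub.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="copilot-api-sketch")

class Query(BaseModel):
    question: str

def generate_answer(question: str) -> str:
    # Placeholder for the actual LLM / agent invocation.
    return f"Stub answer for: {question}"

@app.post("/ask")
def ask(query: Query) -> dict:
    return {"answer": generate_answer(query.question)}

# Run locally with, e.g.:  uvicorn main:app --reload
```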