🚀 Cloud Data Architect (m/f) 🚀
Team Up Recruitment
⚲ Katowice, Kraków, Wrocław, Opole, Łódź
Requirements
- ETL tools
- Data architecture
- Lakehouse
- Cloud (AWS preferred)
- SQL
- Python
Job description
💼 Role: Data Architect
🌍 Location: Hybrid from Katowice (1 day/week in the office)
📆 Start: ASAP
📄 Contract: UoP (employment contract)

We're looking for a hands-on Data Architect to build a next-generation cloud data and AI platform for our client. The role was created as part of a broader business and technology transformation, as the company simplifies its stack. You will own key architectural decisions and set the long-term direction for the platform.

➡️ You will lead the architecture of an AWS lakehouse and work closely with cloud, network, and product teams to support the evolution toward a full Network-as-a-Service ecosystem, with real long-term impact on the platform's future.

🔧 What you'll do:
• Define and evolve the end-to-end data platform architecture, including ingestion, lakehouse/warehouse, orchestration, governance, observability, and AI enablement
• Design scalable ELT/ETL pipelines across systems such as Salesforce, backend platforms, CMDB, network telemetry, and external partners
• Define standards for raw, refined, and curated data layers, ensuring data quality, lineage, and reprocessability
• Introduce CDC, schema evolution, and automated ingestion patterns
• Design the governance and metadata architecture, including catalogues, lineage, and ownership
• Enable AI/ML and automation use cases by building strong, production-ready data foundations
• Act as the technical authority for data architecture decisions across teams

✅ Requirements:
• Strong hands-on experience building modern cloud-native data platforms (AWS preferred)
• Experience with lakehouse or data warehouse architectures (e.g. Redshift, Snowflake, Databricks)
• Experience with modern lakehouse table formats (e.g. Apache Iceberg, Delta Lake, or Apache Hudi)
• Hands-on experience with ELT/ETL and orchestration (e.g. Airbyte, dbt, Spark, Python, Airflow, Dagster, Prefect)
• Strong SQL and Python skills
• Experience working with structured, semi-structured, and streaming data
• Knowledge of data governance, catalogues, lineage, and metadata management (e.g. DataHub or similar)
• DevOps mindset with Git and everything-as-code practices
• Ability to design scalable architectures supporting real-time workflows and event-driven patterns

🌟 Offer:
• International work environment
• Stable, long-term project
• 2-stage recruitment process
• Strong learning and development support

📩 Interested? Send us your CV and let's talk! 😊