
Data SRE/ Data Infrastructure Engineer

RITS Professional Services

⚲ Warsaw

45-50 USD/h net (B2B)

Requirements

  • Apache Kafka
  • ETL
  • Linux
  • Kubernetes
  • Amazon AWS
  • Python
  • Apache Spark

Job description

🔍 Key Info

📍 Work mode: 100% remote in the NYC time zone (at least 5-6 hours of overlap with NYC)
🕒 Contract type: B2B
💼 Experience level: Senior
📆 Project length: Long-term cooperation

🏢 About the Company

We collaborate exclusively with a stable US-based client, a global leader in electronic trading platforms that has operated for over 25 years. The company serves the world's leading asset managers, central banks, hedge funds, and other institutional investors, facilitating around 30 trillion USD in trades every month across its electronic marketplaces.

📌 About the Role

We are looking for a Data Platform Engineer to help build and operate the core infrastructure that powers our data ecosystem. This role focuses on building tools, services, and frameworks for data engineers and data scientists, rather than on developing individual ETL pipelines. You will work on the platform layer of the data stack, designing and maintaining distributed systems, data processing frameworks, and the infrastructure that enables large-scale data processing. You will collaborate closely with data engineers, data scientists, and product teams to develop a reliable, scalable, production-grade data platform used across the organization.

Job responsibilities:

  • Build and run the data platform using technologies such as public cloud infrastructure (AWS and GCP), Kafka, Spark, databases, and containers
  • Develop the data platform based on open-source software and cloud services
  • Build and run ETL tools and frameworks to onboard data into the platform, define schemas, build DAG processing pipelines, and monitor data quality
  • Help develop the machine learning development framework and pipelines
  • Manage and run mission-critical production services

Skills:

  • Experience as a Data Platform Engineer or SRE
  • Strong software engineering experience, including work with Python
  • Experience building ETL and stream processing tools and frameworks using Kafka, Spark, Flink, Airflow/Prefect, etc.
  • Strong experience with SQL and databases/engines such as MySQL, PostgreSQL, SQL Server, Snowflake, Redshift, Presto, etc.
  • Experience using AWS/GCP (S3/GCS, EC2/GCE, IAM, etc.), Kubernetes, and Linux in production
  • Strong proclivity for automation and DevOps practices
  • Experience managing increasing data volume, velocity, and variety
  • Agile self-starter focused on getting things done
  • Strong communicator

Nice to have:

  • Familiarity with the data science stack, e.g. Jupyter, Pandas, Scikit-learn, Dask, PyTorch, MLflow, Kubeflow, etc.
  • Financial Services experience