Senior Big Data Engineer (AWS + Databricks)
SoftServe
⚲ Remote
19 800 - 23 500 PLN (B2B)
Requirements
- AWS
- Python
- Databricks
Job description
About the project:

WE ARE
SoftServe is a global digital solutions company headquartered in Austin, Texas, and founded in 1993. Our associates work on 2,000+ projects with clients across North America, EMEA, APAC, and LATAM. We are about people who create bold things, make a difference, have fun, and love their work.

Big Data & Analytics is the data consulting and data engineering branch of our Center of Excellence. Hundreds of data engineers and architects build end-to-end data & analytics solutions, from strategy through technical design and proof of concept to full-scale implementation. We serve customers in the healthcare, finance, manufacturing, retail, and energy domains. We hold top-level partnership statuses with all the major cloud providers and collaborate with many technology partners, including AWS, GCP, Microsoft, Databricks, Snowflake, Confluent, and others.

TOGETHER WE WILL
- Address different business and technology challenges, engage in impactful projects, use top-notch technologies, and drive multiple initiatives as part of the Center of Excellence
- Support your technical and personal growth – we have a dedicated career plan for all roles in the company
- Investigate new technologies, build internal prototypes, and share knowledge with the SoftServe data community
- Upskill with full access to Udemy learning courses
- Pass professional certifications, encouraged and covered by the company
- Adopt best practices from experts while working in a team of top-notch engineers and architects
- Collaborate with world-leading companies and attend professional events

Requirements:

IF YOU ARE
- A professional with a 4+ year track record in the Big Data field
- Proficient in Python (PySpark) and SQL
- Experienced in developing data solutions on the AWS cloud platform
- Specialised in big data engineering, with a focus on data pipeline creation
- Adept at both batch and streaming processing
- Hands-on with Databricks on AWS, building and managing data pipelines
- Skilled in Apache Spark and Databricks, including large-scale data processing
- Familiar with Delta Lake and core Databricks components (Workflows, Jobs)
- Familiar with orchestration tools such as Apache Airflow or Amazon Managed Workflows for Apache Airflow (MWAA)
- Knowledgeable about Apache Kafka, Amazon MSK (Managed Streaming for Apache Kafka), or Kinesis for data streaming

Daily tasks:
- Be part of a team of data-focused engineers dedicated to continuous learning, improvement, and daily knowledge sharing
- Work with a cutting-edge technology stack, including services from major cloud providers that are at the forefront of innovation and may not yet be generally available
- Engage with customers from diverse backgrounds, from large global corporations to emerging crypto startups preparing to launch their first product
- Be involved in the entire project lifecycle, from initial design and proof of concept (PoC) to minimum viable product (MVP) development and full-scale implementation