Senior Data Engineer with AWS, Python (FastAPI)
DataArt
⚲ Lublin, Wrocław, Warszawa, Kraków, Łódź
20 000 - 24 500 PLN net (B2B) | 17 000 - 20 000 PLN gross (employment contract, UoP)
Requirements
- AWS
- Master Data Management
- PostgreSQL
- Docker
- Data Quality
- Python
Job description
Client

Our client is a leading legal recruiting company building a data-driven platform designed specifically for lawyers and law firms. The platform brings everything together in one place: news and analytics, real-time deal and case tracking from multiple sources, firm and lawyer profiles enriched with cross-linked insights, rankings, and more.

Project overview

The platform aggregates data from hundreds of public sources, including law firm websites, deal announcements, legal databases, and media publications, creating a unified ecosystem of structured and interconnected legal data. It combines AI-driven enrichment, automated data processing, and scalable infrastructure to ensure comprehensive and reliable coverage of the legal market.

Position overview

We are seeking a Senior Data Engineer to design, build, and scale robust data pipelines for collecting, transforming, and structuring large volumes of legal and financial data gathered via scrapers. You will collaborate closely with the AI/ML, DevOps, Front-end, and Back-end teams to keep the data workflows that power the platform running smoothly and efficiently.

Responsibilities

• Design and implement data ingestion pipelines that collect and process structured and unstructured data from multiple online sources (web scraping, APIs, feeds, etc.).
• Develop and optimize ETL/ELT workflows using Python and SQL.
• Build and orchestrate scalable data workflows on AWS services such as Batch and S3.
• Develop and deploy internal data APIs and utilities that support platform data access and manipulation.
• Implement robust text extraction and parsing logic to handle diverse data formats.
• Ensure data quality through validation, deduplication, normalization, and lineage tracking across the Raw ➝ Curated ➝ Enriched data layers.
• Containerize and orchestrate data workloads using Docker and native AWS solutions.
• Collaborate closely with the AI, Back-end, and Front-end teams to ensure efficient data integration and flow.

Requirements

• Proven expertise in Python programming
• Solid understanding of the AWS ecosystem, with hands-on experience in AWS Batch, Amazon S3, AWS Step Functions, and Amazon SQS
• Data Quality experience
• Master Data Management (MDM) experience
• Relational databases, specifically PostgreSQL
• Practical experience with Docker and containerized development workflows
• Experience with web scraping, text extraction, or other data-ingestion techniques from diverse online sources
• Strong analytical mindset, effective communication skills, and ability to collaborate across multiple teams

Nice to have

• Hands-on experience with Apache Spark and SQL for distributed data processing
• Experience with EMR and SageMaker
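To give candidates a feel for the data-quality work described above (validation, deduplication, and normalization across data layers), here is a minimal illustrative sketch in Python. All names, the record schema, and the dedup key are hypothetical, not part of the actual platform:

```python
# Hypothetical sketch of a Raw -> Curated data-quality step:
# validate required fields, normalize values, and deduplicate records.

RAW_RECORDS = [  # "Raw" layer: records as scraped, possibly messy or duplicated
    {"firm": "  Smith & Co  ", "deal_value": "1,200,000", "source": "web"},
    {"firm": "Smith & Co", "deal_value": "1,200,000", "source": "feed"},
    {"firm": "", "deal_value": "n/a", "source": "web"},  # fails validation
]

def validate(rec):
    """A record is valid when it names a firm and has a parseable deal value."""
    return bool(rec["firm"].strip()) and rec["deal_value"].replace(",", "").isdigit()

def normalize(rec):
    """Collapse whitespace in the firm name and convert the value to an integer."""
    return {
        "firm": " ".join(rec["firm"].split()),
        "deal_value": int(rec["deal_value"].replace(",", "")),
    }

def curate(records):
    """Raw -> Curated: keep valid records, normalize them, drop duplicates."""
    seen, curated = set(), []
    for rec in filter(validate, records):
        clean = normalize(rec)
        key = (clean["firm"], clean["deal_value"])  # dedup key
        if key not in seen:
            seen.add(key)
            curated.append(clean)
    return curated

print(curate(RAW_RECORDS))
# One record survives: the two "Smith & Co" rows deduplicate,
# and the empty/unparseable row is rejected.
```

In production this kind of step would typically run as a containerized AWS Batch job orchestrated by Step Functions, reading from and writing to S3, rather than operating on in-memory lists.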