Staff Data Engineer
Canada; Europe
Full-time · Senior · Remote · Hospitality
Description
You will design and implement large-scale distributed data processing systems using Apache Hadoop, Spark, and Flink. You will build robust data pipelines and infrastructure that turn complex data into actionable insights for a platform processing billions in bookings annually.
Requirements
- Deep knowledge of data architecture, ETL/ELT pipelines, and distributed systems
- Strong skills in writing clean, maintainable code in Python, SQL, or Scala
- Experience with containerization (Docker, Kubernetes) and streaming technologies (Kafka, Confluent)
- Ability to work cross-functionally and mentor junior team members
- Strong sense of responsibility for data accuracy, lineage, and security
Responsibilities
- Design and implement large-scale distributed data processing systems using Apache Hadoop, Spark, and Flink
- Build robust data pipelines and infrastructure for scalability and fault-tolerance
- Architect data lakes, warehouses, and real-time streaming platforms
- Implement security measures and optimize performance
- Evaluate new technologies to continuously improve the data ecosystem
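To make the fault-tolerance responsibility above concrete, here is a minimal, hypothetical sketch of one pipeline-stage pattern a role like this deals with: checkpointing progress atomically so a crashed run can resume without reprocessing records. All names (`run_stage`, `load_checkpoint`, etc.) are illustrative, not part of the posting, and a production system would use the checkpointing built into Spark or Flink rather than hand-rolled files.

```python
import json
import os
import tempfile

def load_checkpoint(path):
    """Return the last committed offset, or 0 if no checkpoint exists."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["offset"]
    return 0

def save_checkpoint(path, offset):
    """Persist the offset via write-then-rename so a crash mid-write
    cannot leave a corrupt checkpoint (os.replace is atomic on POSIX)."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"offset": offset}, f)
    os.replace(tmp, path)

def run_stage(records, transform, ckpt_path):
    """Apply transform to records past the checkpoint; safe to re-run."""
    start = load_checkpoint(ckpt_path)
    out = []
    for i, rec in enumerate(records):
        if i < start:
            continue  # already processed in a previous (possibly crashed) run
        out.append(transform(rec))
        save_checkpoint(ckpt_path, i + 1)  # commit progress after each record
    return out
```

Re-running the stage against the same checkpoint processes nothing, which is the idempotency property that lets an orchestrator blindly retry failed runs.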