Data Engineer

SG, Singapore

Job Description

Are you passionate about building great products? Do you want to redefine the way travellers explore the world? Keen to be part of this growth journey with a bunch of amazing people? Then Pelago is the place for you!

We are looking for ambitious and motivated talent who are excited about staying on the cutting edge of technology and always keen to innovate new ways to drive growth and take our startup to new heights.

WHO ARE WE?

Pelago is a travel experiences platform created by Singapore Airlines Group. Think of us as a travel magazine that you can book - highly curated, visually inspiring, with the trust and quality of Singapore Airlines. We connect you with global and local cultures and ideas so you can expand your life.

We are a team of diverse, passionate, empowered, inclusive, authentic and open individuals who share the same values and strive towards a common goal!

WHAT CAN WE OFFER YOU?

- A unique opportunity to take end-to-end ownership of your workstream and deliver real value to users.
- Platforms to solve real user problems in travel planning & booking with innovative products/services.
- An amazing peer group to work with, and the ability to learn from the similarly great minds around you.
- An opportunity to be an integral part of shaping the company's growth and culture in a diverse, fun, and dynamic environment with teammates from different parts of the world.
- Competitive compensation and benefits - including work flexibility, insurance, remote working and more!

WHAT WILL YOU BE DOING IN THE ROLE?

We're looking for a motivated Data Engineer who can independently build and support both real-time and batch data pipelines. You'll be responsible for enhancing our existing data infrastructure, providing clean data assets, and enabling ML/DS use cases.

Responsibilities:

- Develop and maintain Kafka streaming pipelines and batch ETL workflows via AWS Glue (PySpark); a representative batch job is sketched below.
- Orchestrate, schedule, and monitor pipelines using Airflow.
- Build and update dbt transformation models and tests for Redshift.
- Design, optimize, and support data warehouse structures in Redshift.
- Leverage AWS ECS, Lambda, Python, and SQL for lightweight compute and integration tasks.
- Troubleshoot job failures and data inconsistencies, and apply hotfixes swiftly.
- Collaborate with ML/DS teams to deliver feature pipelines and data for modeling.
- Promote best practices in data design, governance, and architecture.
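
For a flavor of the batch side of the role, a minimal AWS Glue (PySpark) job might look like the sketch below. The catalog database, table, and bucket names are hypothetical placeholders, not Pelago's actual assets.

```
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve arguments, build contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog
# ("raw_db" and "bookings" are hypothetical names).
bookings = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="bookings"
)

# Light cleanup in plain PySpark: dedupe and drop incomplete rows.
df = bookings.toDF().dropDuplicates(["booking_id"]).filter("status IS NOT NULL")

# Land partitioned Parquet on S3 for the warehouse to pick up
# (e.g., via Redshift COPY or Spectrum).
df.write.mode("overwrite").partitionBy("booking_date").parquet(
    "s3://example-curated-bucket/bookings/"
)

job.commit()
```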

Tech Stack:

- Streaming & Batch: Kafka, AWS Glue (PySpark), Airflow (orchestration sketched below)
- Data Warehouse & Storage: Redshift, dbt, Python, SQL
- Cloud Services: AWS ECS, Lambda
- Others: strong understanding of data principles, architectures, and processing patterns
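
To show how these pieces are typically wired together, a minimal Airflow DAG might trigger the Glue batch job and then rebuild the dbt models that feed Redshift. The DAG, job, and project paths here are hypothetical; GlueJobOperator comes from the amazon provider package, and the `schedule` argument assumes Airflow 2.4 or later.

```
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_bookings_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Run the Glue batch ETL defined elsewhere (job name is hypothetical).
    glue_etl = GlueJobOperator(
        task_id="run_glue_etl",
        job_name="bookings_batch_etl",
        region_name="ap-southeast-1",
    )

    # Rebuild dbt models and run their tests against Redshift.
    dbt_build = BashOperator(
        task_id="run_dbt_build",
        bash_command="cd /opt/dbt/analytics && dbt build --target prod",
    )

    # Transformations only run after the load lands successfully.
    glue_etl >> dbt_build
```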

WHAT EXPERTISE IS A MUST-HAVE FOR THE ROLE?

- 3-5 years in data engineering or a similar role.
- Hands-on experience with Kafka, AWS Glue (PySpark), Redshift, Airflow, dbt, Python, and SQL (a taste of the streaming side is sketched below).
- A strong foundation in data architecture, modeling, and engineering patterns.
- Proven ability to own end-to-end pipelines in both real-time and batch contexts.
- Skilled at debugging and resolving pipeline failures effectively.
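
For the streaming side, hands-on Kafka work often starts with a consumer loop like the sketch below, using the confluent-kafka Python client; the broker, consumer group, and topic names are hypothetical.

```
import json

from confluent_kafka import Consumer

# Hypothetical broker, group, and topic names.
consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "bookings-stream-etl",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["booking-events"])

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to 1s for the next message
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # A real pipeline would stage the event for the warehouse here;
        # this sketch just acknowledges it.
        print(event.get("booking_id"))
finally:
    consumer.close()
```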

WHAT EXPERTISE IS GOOD TO HAVE?

- Production experience with AWS ECS and Lambda.
- Familiarity with ML/DS feature pipeline development.
- An understanding of data quality frameworks and observability in pipelines.
- AWS certifications (e.g., AWS Certified Data Analytics).

If you're as excited as we are about this journey, do apply directly with a copy of your full resume. We'll reach out to you as soon as we can!

Job Detail

  • Job Id: JD1556575
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Full Time
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: SG, Singapore
  • Education: Not mentioned