Job description
We are currently looking for a highly skilled Data Engineering Leader with a proven track record of designing and implementing high-performance data pipelines, a deep understanding of distributed systems, and a passion for exploring new technologies and approaches to solve complex data engineering problems.
We are looking for a natural leader with the ability to inspire and mentor junior team members and to provide thought leadership that improves our data engineering stack.
Responsibilities:
As a Senior Data Engineering Lead, you will work closely with our data science and data platform teams to design and implement data pipelines that support analytics and machine learning models, with a focus on building high-performance real-time systems. Your responsibilities will include, but not be limited to:
Leading the development and deployment of scalable, reliable, and fault-tolerant data pipelines, whether batch (using Trino, dbt, and Dagster) or real-time (using Kafka, Flink, and YugabyteDB).
Providing mentorship and technical leadership to junior data engineers.
Identifying opportunities to improve our data engineering stack and providing thought leadership on new technologies, tools, and approaches that enhance it, as well as our broader data platform.
Contributing to the development of our enterprise feature store, improving data quality and feature reusability for data science models across the organization.
Requirements:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related quantitative field.
At least 7 years of experience in hands-on data engineering.
Strong programming skills in at least one of the following languages: Python, Java, C++, or Scala.
Expertise in designing and implementing high-performance real-time data pipelines using tools such as Kafka, Spark Streaming, Flink, or equivalent.
Strong understanding of modern data lake query engines, such as Presto/Trino/Athena, BigQuery, or similar.
Knowledge of modern data transformation tools (e.g. dbt).
Hands-on experience with distributed data platforms (e.g. Hadoop).
Understanding of modern container-based platforms (e.g. Kubernetes) and their use as part of a big data stack.
Experience with low latency databases (e.g. Redis, YugabyteDB, or others) would be beneficial.
An understanding of machine learning and data science is not required, but would help you excel in this role.
Curiosity and passion for the data engineering ecosystem are essential, as is a habit of keeping up to date with the latest technology advancements in the field.
Please contact Jackie Panopio at +65 6950 0369 or JackieP@charterhouse.com.sg for a confidential discussion.
REFERRALS ARE GREATLY APPRECIATED.
EA License no: 16S8066 | Reg no.: R1332082