We are looking for a Data Engineer to join our growing data team. This role is open to fresh graduates and early-career professionals (1-2 years of experience) who are excited about building data pipelines, transforming raw data into meaningful insights, and working with modern data technologies. You'll collaborate with data analysts, software engineers, and product teams to ensure that data flows smoothly across our systems and is reliable, secure, and accessible.
Whether you're just starting out or already have some experience, this is a great opportunity to develop your data engineering skills and contribute to impactful, data-driven decision-making.
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL/ELT workflows
Clean, transform, and optimize raw data for storage and analysis
Work with structured and unstructured data from various sources (databases, APIs, files, etc.)
Ensure data quality, accuracy, consistency, and availability
Support data infrastructure (e.g., data lakes, data warehouses) and performance tuning
Collaborate with analysts, data scientists, and backend teams
Document data models, processes, and technical decisions
What You'll Learn
Real-world data engineering with modern tools like Apache Spark/Flink, Kafka, and Airflow
Working with SQL/NoSQL databases, data lakes, and cloud platforms (AWS, GCP, Azure)
Building batch and streaming data pipelines
Data modeling and warehousing
Orchestration and monitoring of data workflows
Best practices in data governance, privacy, and security
Collaboration in agile, cross-functional teams with product, engineering, and analytics
Qualifications
For Fresh Graduates
Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or a related field (or graduating soon)
Understanding of SQL and at least one programming language (Python preferred)
Exposure to data concepts through coursework, internships, or projects
Eagerness to work with large datasets and cloud-based data platforms
Willingness to learn new tools and follow team best practices
For 1-2 Years of Experience
1-2 years of experience in data engineering, backend development, or analytics engineering
Proficient in SQL and Python
Familiar with ETL tools, data pipeline design, and version control (Git)
Experience with cloud services (e.g., S3, Lambda, Cloud Functions, or GCP Dataflow)
Able to troubleshoot data issues and build scalable data solutions
Nice to Have (For All Levels)
Experience with data orchestration tools (Airflow, Prefect, Dagster, etc.)
Familiarity with big data tools (Spark, Kafka, Hadoop)
Exposure to data visualization tools (e.g., Looker, Tableau)
Understanding of CI/CD, containerization (Docker), and infrastructure-as-code
Contributions to personal or open-source data projects
Knowledge of data privacy and compliance (GDPR, HIPAA, etc.)
Soft Skills
Analytical mindset and strong attention to detail
Team player with good communication skills
Open to feedback and continuous improvement
Responsible and proactive in solving data challenges
Eagerness to explore new tools and share knowledge
Bilingual in English and Chinese, able to understand Chinese technical requirements and liaise with Chinese-speaking counterparts
What We Offer
Structured onboarding and mentorship to grow your data skills
Opportunities to work on real-world data systems with production impact
A collaborative, knowledge-sharing team culture
Clear growth paths toward analytics engineering, senior data engineering, or data platform roles