Responsibilities:
* Design, build, and maintain data pipelines for batch and real-time data processing.
* Build and optimize ETL workflows to collect, clean, and transform data from multiple sources.
* Develop and manage data warehouses, data lakes, and database systems.
* Ensure data integrity, quality, and security across all platforms.
* Collaborate with business and analytics teams to design and implement data solutions.
* Optimize query performance and system scalability for large datasets.
* Implement monitoring, alerting, and automation for data workflows.
* Integrate data from APIs, streaming sources, and third-party systems.
* Document data processes, architecture, and data lineage.
* Stay updated on emerging data engineering tools and technologies.
Requirements:
* Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
* 5-10 years of experience in data engineering, data architecture, or related roles.
* Strong proficiency in SQL and data modeling.
* Hands-on experience with ETL tools (e.g., Apache Airflow, Talend, Informatica, AWS Glue).
* Experience with big data frameworks (e.g., Apache Spark, Hadoop).
* Proficiency in Python, Scala, or Java for data processing and automation.
* Experience with cloud data platforms (e.g., AWS Redshift, Azure Synapse, Google BigQuery, Snowflake).
* Familiarity with data lake and data warehouse architectures.
* Strong understanding of data governance, security, and compliance.
* Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
* Experience with Kafka, Kinesis, or other streaming technologies.
* Knowledge of NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
* Familiarity with CI/CD pipelines for data projects.
* Experience with Docker, Kubernetes, or cloud-native data deployments.
* Certifications such as AWS Certified Data Analytics, Google Professional Data Engineer, or Azure Data Engineer Associate are an advantage.