pipelines and data architectures.
Integrate data from multiple sources into centralized data warehouses or data lakes.
Optimize data storage, transformation, and retrieval processes for performance and scalability.
Collaborate with cross-functional teams (Data Science, Analytics, BI, and Engineering) to deliver data solutions.
Implement data governance, quality, and security best practices.
Automate data workflows and monitor data pipeline performance and reliability.
Develop and maintain data models, schemas, and metadata documentation.
Work with cloud data services (AWS, Azure, or GCP) to manage and deploy data solutions.
Troubleshoot data issues, perform root cause analysis, and ensure high data integrity.
Continuously evaluate new technologies to improve data engineering processes.
Required Skills & Qualifications:
Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field.
5-7 years of hands-on experience in Data Engineering or a related role.
Strong proficiency in SQL and experience with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
Expertise in data pipeline tools such as Apache Airflow, Kafka, Spark, NiFi, or Talend.
Strong programming skills in Python, Java, or Scala for data manipulation and automation.
Experience with big data frameworks (Hadoop, Spark, Hive, HBase).
Proven experience with cloud platforms (AWS Glue, Redshift, S3, Azure Data Factory, GCP BigQuery, etc.).
Hands-on experience with data modeling, ETL design, and data warehouse architecture.
Familiarity with DevOps, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.
Strong analytical and problem-solving skills with attention to detail.