AWS, Python, PySpark, SQL.
Databricks (good to have)
Key Responsibilities
Design, develop, and maintain data pipelines and ETL processes using PySpark on distributed systems (see the sketch after this list).
Implement scalable solutions on AWS cloud services (e.g., S3, EMR, Lambda).
Optimize data workflows for performance and reliability.
Collaborate with data engineers, analysts, and business stakeholders to deliver high-quality solutions.
Ensure data security and compliance with organizational standards.
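A minimal sketch of the kind of PySpark-on-AWS pipeline these responsibilities describe: read raw data from S3, apply a simple transformation, and write a curated output back to S3. The bucket names, paths, and columns (order_ts, amount) are illustrative assumptions, not details from this posting.

    # Illustrative only: paths and column names below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Read raw data from S3 (placeholder bucket)
    orders = spark.read.parquet("s3://example-raw-bucket/orders/")

    # Basic transform: drop invalid rows and aggregate daily revenue
    daily_revenue = (
        orders
        .filter(F.col("amount") > 0)
        .groupBy(F.to_date("order_ts").alias("order_date"))
        .agg(F.sum("amount").alias("revenue"))
    )

    # Write the curated output back to S3, partitioned by date
    daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-curated-bucket/daily_revenue/"
    )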
Required Skills
AWS: Hands-on experience with core AWS services for data engineering.
Python: Strong programming skills for data processing and automation.
PySpark: Expertise in distributed data processing and the Spark framework.
SQL: Proficiency in writing complex queries and optimizing performance.
Good to Have
Databricks: Experience with the Databricks platform for big data analytics.
Knowledge of CI/CD pipelines and DevOps practices.
Familiarity with data lake and data warehouse concepts.
Job Type: Contract
Contract length: 12 months
Pay: $8,000.00 - $10,000.00 per month
Benefits:
Health insurance
Work Location: In person