Develop and maintain ETL processes for finance regulatory reporting projects and applications.
Develop Big Data applications using the Agile software development life cycle.
Enhance existing applications based on mapping specifications provided by the Tech BA.
Collaborate with the Tech BA and Scrum Master on project delivery and issue resolution.
Contribute to the coding, testing, and Level 2/3 support of the data warehouse.
Partner with business stakeholders to ensure requirements are met and collaborate with other technology teams (QA, Production Support) for effective implementation.
Provide technical expertise in the design, testing, and implementation of software and infrastructure that support data infrastructure and governance activities.
Troubleshoot and resolve technical problems in applications or processes, providing effective solutions.
Perform performance tuning using execution plans and other relevant tools (see the PySpark sketch following this list).
Continuously explore and evaluate evolving tools within the Hadoop ecosystem and apply them to relevant business challenges.
Analyze and debug existing shell scripts, and enhance them as needed.
Must have a background in Spark development.
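To give a concrete sense of the execution-plan tuning mentioned above, here is a minimal PySpark sketch (Spark 3.x assumed); the finance.trades table, its columns, and the app name are hypothetical placeholders, not part of the role description.

```python
# Minimal sketch: inspect a Spark query's execution plan before tuning.
# The finance.trades table and its columns are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("plan-inspection").getOrCreate()

# Read a (hypothetical) Hive table and apply a typical filter/aggregate.
df = (
    spark.table("finance.trades")
         .where("trade_date >= '2024-01-01'")
         .groupBy("desk")
         .count()
)

# Print the parsed, analyzed, optimized, and physical plans. Scanning the
# physical plan for full table scans or wide shuffles is a common first
# step when tuning a slow job.
df.explain(mode="extended")
```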
Required Skills:
3+ years of experience as a Data Engineer, with expertise in Spark development.
Bachelor's degree in Computer Science or a related field.
Experience with Spark, HDFS, MapReduce, Hive, Impala, Sqoop, and Linux/Unix technologies.
Hands-on experience with RDBMS technologies (e.g., Oracle, MariaDB); a Spark-based ingest sketch illustrating this stack follows the list.
Strong analytical and problem-solving skills.
Experience working with a Big Data implementation in a production environment.
Familiarity with Unix shell scripting.
Understanding of Agile methodology and experience working in an Agile development environment.
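As an illustration of the Spark/Hive/RDBMS stack listed above, here is a minimal PySpark sketch of a Sqoop-style ingest. The JDBC URL, credentials, and the positions and dw.positions table names are hypothetical placeholders, and the MariaDB JDBC driver is assumed to be on the classpath.

```python
# Minimal sketch: read a table from an RDBMS over JDBC and persist it to
# a Hive table on HDFS. URL, credentials, and table names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
        .appName("rdbms-ingest")
        .enableHiveSupport()  # needed for saveAsTable into the Hive metastore
        .getOrCreate()
)

# Read a source table from MariaDB (or Oracle) via JDBC.
src = (
    spark.read.format("jdbc")
         .option("url", "jdbc:mariadb://db-host:3306/finance")
         .option("dbtable", "positions")
         .option("user", "etl_user")
         .option("password", "***")
         .load()
)

# Write to a managed Hive table, partitioned by business date (assumes the
# source table has a business_date column).
src.write.mode("overwrite").partitionBy("business_date").saveAsTable("dw.positions")
```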