University graduate in Computer Science or a related discipline
5-7 years of experience in developing, maintaining and supporting enterprise-class Big Data applications, preferably in a banking environment
Expertise in Big Data technologies, tools and platforms such as Hadoop, HDFS, Hive, Impala, Presto, Spark, Zeppelin, YARN, Cloudera and Hortonworks
Experience with relational databases
Knowledge of and hands-on experience in coding and performance tuning Spark jobs; Spark with Java preferred but not mandatory (see the batch sketch after this list)
Establish and adhere to best practices relating to Apache Spark programming
Responsible for understanding functional and non-functional requirements and implementing them in Apache Spark
Experience in using CI/CD tools such as Jenkins, Git and Bitbucket
Experience in writing basic Linux shell scripts
Knowledge of and experience with the Apache Spark batch and streaming frameworks (see the streaming sketch after this list)
Knowledge of working with different file formats such as Parquet, ORC, Avro and JSON
Knowledge of microservices architecture
Knowledge of S3 is an added advantage
Ability to work proactively, independently and with cross-functional and cross-regional teams
Strong communication and analytical skills, and experience working on Agile projects
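
To illustrate the kind of Spark batch work the role involves, below is a minimal Java Spark sketch; the paths, column names and aggregation are hypothetical placeholders, not project specifics. It reads Parquet input, computes a simple per-account total and writes the result as ORC.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.sum;

public class TransactionSummaryJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("TransactionSummaryJob")
                .getOrCreate();

        // Read source data in Parquet format (input path is a placeholder).
        Dataset<Row> transactions = spark.read().parquet("hdfs:///data/raw/transactions");

        // Simple aggregation: total amount per account.
        Dataset<Row> summary = transactions
                .groupBy(col("account_id"))
                .agg(sum(col("amount")).alias("total_amount"));

        // Write the result as ORC for downstream Hive/Impala queries.
        summary.write()
                .mode("overwrite")
                .orc("hdfs:///data/curated/transaction_summary");

        spark.stop();
    }
}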
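
For the streaming side, a similarly minimal Structured Streaming sketch in Java is shown below; the schema, paths and checkpoint location are again assumed placeholders. It watches a landing directory for JSON files and continuously appends the records as Parquet, with checkpointing so the job can be restarted.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class PaymentStreamJob {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("PaymentStreamJob")
                .getOrCreate();

        // Streaming file sources require an explicit schema.
        StructType schema = new StructType()
                .add("payment_id", DataTypes.StringType)
                .add("amount", DataTypes.DoubleType);

        // Watch a landing directory for new JSON files (path is a placeholder).
        Dataset<Row> payments = spark.readStream()
                .schema(schema)
                .json("hdfs:///data/landing/payments");

        // Append incoming records as Parquet; the checkpoint makes the query restartable.
        StreamingQuery query = payments.writeStream()
                .format("parquet")
                .option("path", "hdfs:///data/stream/payments")
                .option("checkpointLocation", "hdfs:///checkpoints/payments")
                .outputMode("append")
                .start();

        query.awaitTermination();
    }
}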