4 to 8 years of experience in the analysis, design, development, and maintenance of quality big data applications using Hadoop concepts and frameworks.
At least 2 to 4 years of hands-on experience with Hadoop, plus working knowledge of Hive, Spark, Sqoop, ETL tools, and a database such as Teradata, DB2, or Oracle.
Expertise in HDFS architecture, the Hadoop framework (MapReduce, Hive, Pig, Sqoop, Flume), and data warehouse concepts.
CI/CD experience (Jenkins, GitHub) is a must.
Proficient in Hadoop MapReduce programming and the Hadoop ecosystem.
Good understanding of RDBMS principles and shared-nothing MPP architectures.
Hands-on experience with performance tuning, query optimization, file and error handling, and restart mechanisms.
Hands-on experience with UNIX shell scripting and Python.
Good communication skills and the ability to collaborate effectively with the team on deliverables.
Hands-on experience with container-related technologies such as OpenShift, Docker, and Kubernetes.
Working knowledge of Unix/Linux administration and Bash scripting.
Hands-on experience monitoring software and infrastructure using related tools such as Grafana and the ELK stack.