We are seeking a Hadoop Administration professional with at least 2 years of experience in implementing and administering Hadoop infrastructure. The ideal candidate will have a strong understanding of Hadoop, MapReduce, HBase, Hive, Pig, and Mahout, as well as experience working with Cloudera Manager or Ambari, Ganglia, and Nagios.
Requirements
- At least 2 years of experience in the implementation and administration of Hadoop infrastructure
- At least 2 years of experience in project life-cycle activities on development and maintenance projects
- Operational expertise in troubleshooting, with an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking
- Hands-on experience with Hadoop, MapReduce, HBase, Hive, Pig, and Mahout
- Hadoop administration skills: experience with Cloudera Manager or Ambari, plus Ganglia and Nagios
- Experience using Hadoop schedulers: FIFO, Fair Scheduler, and Capacity Scheduler
- Experience in job schedule management with Oozie or enterprise schedulers such as Control-M or Tivoli
- Good knowledge of Linux
- Exposure to setting up AD/LDAP/Kerberos authentication models
- Familiarity with open-source configuration management and deployment tools such as Puppet or Chef, Linux scripting, and Autosys
- Experience in shell and Perl scripting, with exposure to Python
- Knowledge of troubleshooting core Java applications is a plus
- Exposure to real-time execution engines such as Spark, Storm, and Kafka
- Version control tools: Subversion, ClearCase, CVS, or GitHub
- Experience with service management ticketing tools such as ServiceNow, Service Manager, or Remedy