Hadoop Administration
SonsoftInc
Posted: June 23, 2017
Quick Summary
Hadoop Administration is a key role involving the implementation and administration of Hadoop infrastructure, including designing and implementing Hadoop clusters, managing Hadoop resources, and ensuring data quality and security.
Job Description
Sonsoft, Inc. is a US-based corporation duly organized under the laws of the State of Georgia. Sonsoft Inc. is growing at a steady pace, specializing in the fields of Software Development, Software Consultancy, and Information Technology Enabled Services.
• At least 4 years of experience in the implementation and administration of Hadoop infrastructure
• At least 2 years of experience architecting, designing, implementing, and administering Hadoop infrastructure
• At least 2 years of experience in project life cycle activities on development and maintenance projects
• Able to advise client and internal teams on which product or distribution is best suited to a given situation or setup
• Operational expertise in troubleshooting, with an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking
• Hadoop, MapReduce, HBase, Hive, Pig, Mahout
• Hadoop administration skills: experience working with Cloudera Manager or Ambari, plus tools like Ganglia and Nagios
• Experience with Hadoop schedulers: FIFO, Fair Scheduler, and Capacity Scheduler
• Experience in job schedule management with Oozie or enterprise schedulers such as Control-M or Tivoli
• Good knowledge of Linux (RHEL, CentOS, Ubuntu)
• Experience setting up AD/LDAP/Kerberos authentication models
• Experience with data encryption techniques
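As a concrete illustration of the Capacity Scheduler experience listed above, a minimal `capacity-scheduler.xml` sketch might divide cluster resources between two queues. The queue names and percentages below are illustrative assumptions, not part of this posting; the property names follow the standard YARN Capacity Scheduler configuration:

```xml
<!-- Sketch only: two hypothetical queues, "prod" and "dev", under root. -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>prod,dev</value>
  </property>
  <property>
    <!-- Guaranteed share for the prod queue (percent of cluster). -->
    <name>yarn.scheduler.capacity.root.prod.capacity</name>
    <value>70</value>
  </property>
  <property>
    <!-- Guaranteed share for the dev queue. -->
    <name>yarn.scheduler.capacity.root.dev.capacity</name>
    <value>30</value>
  </property>
  <property>
    <!-- Cap on how far dev may grow into idle capacity. -->
    <name>yarn.scheduler.capacity.root.dev.maximum-capacity</name>
    <value>50</value>
  </property>
</configuration>
```

The Fair Scheduler and FIFO are selected instead via the `yarn.resourcemanager.scheduler.class` property in `yarn-site.xml`.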
Responsibilities:
• Upgrades and Data Migrations
• Hadoop Ecosystem and Clusters maintenance as well as creation and removal of nodes
• Perform administrative activities with Cloudera Manager/Ambari and tools like Ganglia, Nagios
• Setting up and maintaining Infrastructure and configuration for Hive, Pig and MapReduce
• Monitor Hadoop Cluster Availability, Connectivity and Security
• Setting up Linux users, groups, Kerberos principals and keys
• Aligning with the Systems engineering team in maintaining hardware and software environments required for Hadoop
• Software installation, configuration, patches and upgrades
• Working with data delivery teams to setup Hadoop application development environments
• Performance tuning of Hadoop clusters and Hadoop MapReduce routines
• Data modelling, Database backup and recovery
• Manage and review Hadoop log files
• File system management, disk space management, and monitoring (e.g., Nagios, Splunk)
• HDFS support and maintenance
• Planning of Back-up, High Availability and Disaster Recovery Infrastructure
• Teaming diligently with Infrastructure, Network, Database, Application, and Business Intelligence teams to ensure high data quality and availability
• Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades
• Implementation of Strategic Operating model in line with best practices
• Point of Contact for Vendor escalations
• Ability to work in a team in a diverse, multi-stakeholder environment
• Analytical skills
• Experience and desire to work in a Global delivery environment
• Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
• At least 7 years of experience in Information Technology.
** U.S. citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to sponsor at this time.
Note:
• This is a full-time, permanent job opportunity.
• Only US Citizens, Green Card holders, and GC-EAD, H4-EAD, and L2-EAD holders can apply.
• No OPT-EAD, TN Visa, or H1B consultants, please.
• Please mention your visa status in your email or resume.