Roles and Responsibilities:
Hadoop development and implementation.
Loading data from disparate data sets.
Pre-processing using Hive and Pig.
Designing, building, installing, configuring, and supporting Hadoop clusters.
Translating complex functional and technical requirements into detailed designs.
Performing analysis of vast data stores to uncover insights.
Maintaining security and data privacy.
Creating scalable, high-performance web services for data tracking.
Managing and deploying HBase.
Contributing to proof-of-concept (POC) efforts to build new Hadoop clusters.
Testing prototypes and overseeing handover to operational teams.
Proposing best practices and standards.
Good knowledge of back-end programming, specifically Java, JavaScript, Node.js, and object-oriented analysis and design (OOAD).
Good knowledge of database structures, theories, principles, and practices.
Ability to write Pig Latin scripts.
Hands-on experience with HiveQL.
Familiarity with data loading tools like Flume and Sqoop.
Knowledge of workflow schedulers like Oozie.
Analytical and problem-solving skills applied to the Big Data domain.
Proven understanding of Hadoop, HBase, Hive, and Pig.
Good aptitude for multi-threading and concurrency concepts.
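To illustrate the multi-threading and concurrency concepts listed above, here is a minimal Java sketch; the class and method names are illustrative only and not taken from any project named in this document. It shows lock-free, thread-safe counting with an AtomicInteger and a fixed thread pool:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative example: safely incrementing a shared counter from many threads.
public class CounterDemo {
    public static int countConcurrently(int tasks) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);       // lock-free shared state
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < tasks; i++) {
            pool.submit(counter::incrementAndGet);          // atomic update, no races
        }
        pool.shutdown();                                    // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS);         // wait for submitted tasks
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countConcurrently(1000));        // prints 1000
    }
}
```

A plain `int` counter here would lose updates under contention; the atomic increment avoids that without explicit locking.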