Job Responsibilities:
1) Proven experience in SQL, Spark, and the Hadoop ecosystem
2) Have worked on multiple TBs of data volume, from ingestion to consumption
3) Work with business stakeholders to identify and document high-impact business problems and potential solutions
4) Good understanding of Data Lake/Lakehouse architecture and experience/exposure to Hadoop (Cloudera, Hortonworks) and/or AWS
Work on the end-to-end data lifecycle across the Data Ingestion, Data Transformation, and Data Consumption layers. Well versed with APIs and their usability.
A suitable candidate will also be proficient in Spark, Spark Streaming, Hive, and SQL.
5) A suitable candidate will also demonstrate experience with big data infrastructure, including MapReduce, Hive, HDFS, YARN, HBase, Oozie, etc.
6) The candidate will additionally demonstrate substantial experience with, and deep knowledge of, relational databases.
7) Strong skills in debugging code when issues arise, along with experience using Git for version control
8) Creating technical design documentation for projects/pipelines
Eligibility Criteria:
1) BE/B.Tech
2) Year of Passing: 2016 to 2023
3) Experience: 0 to 3 years
Location: Mumbai
Apply Link: Click Here