About the Role
· Analyze system requirements and design responsive algorithms and solutions
· Use Big Data and cloud technologies to produce production-quality code
· Engage in performance tuning and scalability engineering
· Work with team, peers and management to identify objectives and set priorities
· Perform related SDLC engineering activities such as sprint planning and estimation
· Work effectively in small agile teams
· Provide creative solutions to problems
· Identify opportunities for improvement and execute on them
Essential skills:
· Experience with cloud-based Big Data technologies
· Proficiency in Hive / Spark SQL
· Experience with Spark
· Experience with one or more programming languages such as Scala, Python, or Java
· Ability to push the frontier of technology and independently pursue better alternatives
Core Skills: Hive, SQL, Spark, Hadoop, Python, Big Data, AWS