We are seeking a highly motivated and detail-oriented Big Data Platform Development Engineer to join the team.
The ideal candidate will be responsible for onboarding new users and clients, setting up build pipelines, reviewing code and queries, and coding, testing, and performance-tuning jobs. The candidate should have a strong technical background in Big Data technologies such as Hadoop, Spark, and Hive, be comfortable working with complex systems, and possess excellent communication and organizational skills for collaborating with both internal and external stakeholders.
The candidate will have the opportunity to contribute to the Markets Data Strategy and its goal of increasing revenue by providing key analytics/MIS/metrics for decision making, while also improving regulatory quality and accuracy. The candidate will work directly with bright, innovative individuals on both the business and technology sides, gaining broad exposure to front-to-back trade processing and electronic trading across all asset classes. The successful candidate can make a significant difference to business performance.
Key Responsibilities:
1. Onboard new users and clients to the Big Data platform.
2. Collaborate with cross-functional teams to identify and prioritize data requirements, data quality issues, and performance bottlenecks.
3. Set up build pipelines to automate the build and deployment process.
4. Conduct code reviews to detect suboptimal queries and performance issues.
5. Guide teams in following industry-standard best practices.
6. Design and develop highly scalable and fault-tolerant big data solutions on the Cloudera platform.
7. Monitor and optimize the risk data platform for performance, scalability, and cost-effectiveness.
Qualifications:
1. Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
2. 5+ years of experience in big data technologies, with at least 3 years of experience in Apache Spark and Cloudera.
3. Strong knowledge of big data technologies, including Hadoop, Hive, HBase, Kafka, and YARN.
4. Excellent programming skills in Python and/or Scala.
5. Strong analytical, problem-solving, and communication skills.
6. Ability to work both independently and collaboratively in a fast-paced, agile environment.
Seniority level
Mid-Senior level
Employment type
Contract
Job function
Information Technology
Industries
Technology, Information and Media