Context

The client has Basel III Capital Reporting deadlines and requires assistance to support the development of its AWS-based capital and RWA reporting platform, which processes data from upstream systems, executes QA models, and performs aggregation and reporting. Outputs are produced from its Hadoop ecosystem, and engineers are required with an excellent understanding of Scala and Spark, in which 99% of the jobs are written, supported by good knowledge of Hadoop/Hive and Java. A representative sketch of such a job appears after the requirements below.

Must Have

Strong core programming and ETL skills with:
- Scala
- Spark
- Experience in developing complex data transformation workflows (ETL) using big data technologies
- Hands-on experience in fine-tuning Spark jobs
- 6 years' experience

Nice to Have

- Big data Hadoop experience (HDFS, Hive, HBase; Impala to a lesser extent)
- Java with multithreading and distributed computing
- AWS
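For illustration, below is a minimal sketch of the kind of Scala/Spark aggregation job the role involves, reading an upstream extract, transforming it, and writing a reporting output. All paths and column names (exposure data under s3://upstream/exposures/, columns asset_class and rwa_amount) are hypothetical placeholders, not the client's actual schema.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object RwaAggregationJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("RwaAggregationJob")
          .enableHiveSupport() // interoperate with Hive tables in the Hadoop ecosystem
          .getOrCreate()

        // Extract: read a hypothetical upstream exposure feed
        val exposures = spark.read.parquet("s3://upstream/exposures/")

        // Transform: drop incomplete records and aggregate RWA by asset class
        val rwaByAssetClass = exposures
          .filter(col("rwa_amount").isNotNull)
          .groupBy(col("asset_class"))
          .agg(sum(col("rwa_amount")).as("total_rwa"))

        // Load: write the aggregate for downstream capital reporting
        rwaByAssetClass.write
          .mode("overwrite")
          .parquet("s3://reporting/rwa_by_asset_class/")

        spark.stop()
      }
    }

Fine-tuning jobs like this typically means reasoning about shuffle behaviour and partitioning, for example adjusting spark.sql.shuffle.partitions to match the data volume rather than accepting the default.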