Salary: 104,000 GBP per year
Requirements:
* I am looking for a contractor with strong expertise in building, scheduling, and maintaining data pipelines. You should have solid experience with PySpark, Spark SQL, Hive, Python, and Kafka. A strong background in data collection and integration, scheduling, data storage and management, and ETL (Extract, Transform, Load) processes is essential. You should also be familiar with relational and non-relational databases such as MySQL, PostgreSQL, and MongoDB. Excellent written and verbal communication skills are important, as is experience managing business stakeholders to clarify requirements.
Responsibilities:
* In this role, you will work closely with our development team to assess the existing Big Data infrastructure. You will design and code Hadoop applications to analyze data sets, build data processing frameworks, and extract and isolate data clusters. Testing scripts to analyze results and troubleshoot bugs will be a key part of your responsibilities, as will creating data tracking programs and documentation. You will also be responsible for maintaining security and data privacy throughout our processes.
Technologies:
* Big Data
* ETL
* Hadoop
* Hive
* Kafka
* MongoDB
* MySQL
* PostgreSQL
* Python
* PySpark
* SQL
* Security
* Spark
More:
We are partnered with a leading global consultancy and are excited to find the right candidate for a long-term contract within the energy sector. This hybrid role is based in Windsor, with a competitive rate of up to £400 per day (inside IR35) for an initial duration of 6 months, with a view to extension. If you are interested and have the relevant experience, I encourage you to apply promptly; I would love to discuss this opportunity with you further.