We are excited to partner with our client in seeking skilled Data Engineers to join their innovative team. This is a fully remote opportunity, allowing you to work from anywhere while contributing to cutting-edge data products and pipelines. These systems deliver high-quality, standardised data to analysts both internally and across government sectors, and the roles are instrumental in providing timely, impactful datasets in response to significant national events.
Role Overview
As a Data Engineer, you will play a key part in developing and maintaining robust data pipelines and ETL processes. You will contribute to high-profile projects, such as creating integrated data sharing platforms and supporting critical national initiatives.
Key Technologies:
* Python, PySpark, SQL
* Cloud platforms: Google Cloud Platform (GCP) or Amazon Web Services (AWS) with Cloudera
Key Responsibilities:
* Design and build data pipelines and ETL processes.
* Contribute to data validation packages and automation tools.
* Work collaboratively within a diverse team while also managing your own workload independently.
* Maintain high standards of coding, peer review, testing, and documentation.
* Investigate and resolve data quality issues with innovative solutions.
* Stay current with industry best practices and tools, identifying areas for innovation.
* Actively contribute to team development by sharing knowledge and leading sessions on new technologies or processes.
Ideal Candidate:
We are looking for candidates with a strong technical background, whether in data analysis, data science, or software engineering, who demonstrate a proactive mindset and a passion for quality work. If you are eager to expand your technical skills and contribute to a dynamic team, this role is for you.
If you are interested in exploring this opportunity further, please reach out for more details!