Data Engineer - Analytics/Modelling
Location: Birmingham/London, UK
Mode: Hybrid
Responsibilities
1. Lead the design and implementation of AWS-based data products that leverage reusable datasets.
2. Collaborate on creating scalable data solutions using new and emerging technologies on the AWS platform.
3. Demonstrate AWS data expertise when communicating with stakeholders and translating requirements into technical data solutions.
4. Manage both real-time and batch data pipelines using a technology stack that includes Kafka, AWS Kinesis, Redshift, and DBT.
5. Design and model data workflows from ingestion to presentation, ensuring data security, privacy, and cost-effectiveness.
6. Create a showcase environment to demonstrate data engineering best practices and cost-effective solutions on AWS.
7. Build a framework suitable for stakeholders with low data fluency to enable easy access to data insights and facilitate informed decision-making.
Requirements
1. Expertise in the full data lifecycle: project setup, data pipeline design, data modelling and serving, testing, deployment, monitoring, and maintenance.
2. Strong data architecture background in cloud-based architectures (SaaS, PaaS, IaaS).
3. Proven engineering skills with experience in Python, SQL, Spark, and DBT, or similar frameworks for large-scale data processing.
4. Deep knowledge of AWS services relevant to data engineering, including AWS Glue, AWS EMR, Amazon S3, and Amazon Redshift.
5. Experience with Infrastructure-as-Code (IaC) using Terraform or AWS CloudFormation.
6. Proven ability to design and optimize data models to address data quality and performance issues.
7. Excellent communication and collaboration skills to work effectively with stakeholders across various teams.
8. Ability to create user-friendly data interfaces and visualizations that cater to stakeholders with varying levels of data literacy.