Job Description

Location: London, Hybrid

We are seeking a talented Data Engineer to join one of our customer-facing tribes, where you'll play a key role in enabling data-driven insights and decision-making. In this position, you will work across all squads within the tribe, ensuring that analysts, data scientists, and other stakeholders have seamless access to high-quality, scalable data to support model development, analytics, and dashboard creation. By deeply understanding the tribe's data needs, you will help create robust solutions that empower teams to leverage data effectively.

Collaboration is at the heart of this role, as you will work closely with software engineers, data scientists, analysts, and data governance experts to align on requirements and goals. Additionally, you will partner with data engineers from other tribes and the centralised data platform team to establish and uphold best practices and standards. Your expertise will ensure that data engineering needs are fully integrated into the planning process, supporting the delivery of impactful, scalable data solutions across the organisation.

What you'll do

- Play a leading role in the development of Gousto-wide data engineering best practices.
- Work with software engineers to ensure that data engineering best practices are adopted within the tribe.
- Design and roll out new data engineering pipelines and services.
- Improve existing data engineering pipelines and services.
- Define and implement MLOps best practices for data science products.
- Champion data quality by developing processes and systems to maintain data integrity.
- Advise the tribe management team on the required data engineering capabilities for all OKR work as part of the planning process.
Who you are

- Experience in a senior data engineering or data platform engineering role, with a highly developed understanding of distributed computing systems using Spark or PySpark
- Experience working with a range of analytics-based services across modern cloud technologies, such as AWS
- Fluent in Python and SQL
- Experience with Databricks (other modern data warehouses are relevant too)
- Experience administering AWS services using IaC toolsets (we use Terraform, but others are relevant)
- Experience with modern data deployments using version control, CI/CD tooling, and testing frameworks
- The ability to communicate effectively with team members and stakeholders, with proven experience articulating technical concepts
- Experience managing projects from design through implementation, and communicating progress clearly to stakeholders
- The ability to understand stakeholder requirements and translate them into data services
- A passion for data, with demonstrable attention to detail, a problem-solving mindset, and a positive attitude
- Experience working closely with data scientists to understand and deliver data requirements
- Experience deploying and monitoring machine-learning algorithms in production

Exposure to any of the following would be useful, but not essential: Data Mesh, Spark streaming, Experimentation