Product and Tech • Flexible remote • London, UK
We are looking for a Machine Learning Engineer with strong data science roots and real-world deployment experience to help shape, optimise and scale our ML stack as we continue to grow.
YuLife is an award-winning InsurTech company and an employee benefit provider.
We’re the world’s first “lifestyle insurance” company. While other insurers are there for people at the point of death or illness, we engage with people every day to help them live better lives.
To do that, we’ve built an award-winning app that rewards people for building healthy habits. It includes the best wellbeing and digital health tools in the world.
Customers can earn vouchers from great brands like Amazon, Tesco or ASOS for doing simple things like walking or practising mindfulness. They can even do good by planting trees or helping clean the oceans, straight from the app.
Our clients include Tesco, Capital One and Fujitsu, and we’ve been ranked the #1 employee benefit in the UK on Trustpilot. More recently, YuLife was named ‘Best Insurtech 2024’ at the CX Insurance Awards and ranked the 8th fastest-growing technology company in the UK in the prestigious 2023 Deloitte Technology Fast 50.
The role:
We’re looking for an experienced Machine Learning Engineer to join our growing data science team. This is a hands-on role for someone who has grown from a data science background and has transitioned into machine learning engineering. You'll be working alongside our Lead Data Scientist and the wider data team to evolve and scale our ML infrastructure.
You’ll play a key role in building, deploying and optimising both real-time and batch ML models, ensuring they run smoothly and add measurable value to our product and users.
Day-to-day responsibilities include, but are not limited to:
1. Designing, building, and deploying production-grade ML pipelines and services in Python
2. Collaborating with data scientists and data engineers to productionise models and guide them through deployment best practices
3. Improving the scalability, reliability, and automation of our ML infrastructure and workflows
4. Developing systems for both real-time (low-latency) and batch inference use cases
5. Working with AWS and other cloud services (e.g. Fargate, ECS, Lambda, S3, SageMaker, Step Functions) to deploy and monitor models in production
6. Implementing and maintaining CI/CD pipelines for ML workflows, ensuring repeatability and robust version control
7. Setting up monitoring and alerting frameworks to track model drift, data quality, and inference health
8. Leveraging orchestration tools such as Dagster, Airflow, or Prefect to manage and scale ML workflows
9. Supporting ongoing infrastructure migration or optimisation initiatives (e.g. improving cost efficiency, latency, or reliability)
10. Partnering with product and engineering teams to ensure ML solutions are aligned with business goals, and that performance metrics and outcomes are clearly tracked
11. Documenting and continuously evolving ML engineering best practices
The ideal candidate will have:
1. 3+ years of hands-on experience in ML engineering, ideally having started in data science with a strong foundation in a quantitative field such as mathematics, statistics, physics, or computer science
2. Strong Python programming skills with experience in building and maintaining production ML applications
3. Demonstrated experience deploying ML models into production environments, including both batch and real-time/streaming contexts
4. Proficiency working with distributed computing frameworks such as Apache Spark, Dask, or similar
5. Experience with cloud-native ML deployment, particularly on AWS, using services like ECS, EKS, Fargate, Lambda, S3, and more
6. Familiarity with orchestration and workflow scheduling tools such as Dagster, Airflow, or Prefect
7. Knowledge of CI/CD best practices and tools (e.g. GitHub Actions, Jenkins, CodePipeline)
8. Exposure to monitoring and observability tools for ML systems (e.g. Prometheus, Grafana, DataDog, WhyLabs, Evidently, etc.)
9. Experience in building parallelised or distributed model inference pipelines
Nice-to-have skills:
1. Familiarity with feature stores and model registries (e.g. Feast, MLflow, SageMaker Model Registry)
2. Knowledge of model versioning, A/B testing, and shadow deployments
3. Experience implementing or contributing to MLOps frameworks and scalable deployment patterns
4. Experience with containerisation and container orchestration (Docker, Kubernetes)
5. Comfortable working in a fast-paced, cross-functional team with product and engineering stakeholders
What you’ll get:
We like to give more than we take, so here are some of our benefits:
* Potential to earn share options
* 6x salary life assurance
* Health Insurance
* Income protection
* 3% company contribution to pension via salary sacrifice scheme
* 25 days annual leave + 1 “Love Being Yu” day (e.g. your birthday, moving house, or anything else that is for Yu!)
* Access to the YuLife app (which includes a tonne of wellbeing rewards, discounts and exclusive offers, as well as access to Meditopia and Fiit)
* £20 per month to a "be your best Yu" budget
* Unlimited monthly professional coaching with More Happi
* Remote and flexible working
* Our lovely office in Shoreditch is currently available if people want (and only if they want) to use it
Here at YuLife our values encompass Love Being Yu and as a result we’re committed to diversity and inclusion. We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, colour, sex, age, national origin, religion, sexual orientation, gender identity and/or expression, disability or any other protected class.