About the Role:
As a Machine Learning Operations (MLOps) Engineer, you’ll be responsible for managing, releasing, and monitoring Machine Learning (ML) and Artificial Intelligence (AI) artefacts using automated frameworks. You’ll also optimise ML/AI code written by our Data Scientists into production-ready software according to agreed performance and cost criteria.
You’ll play a key role in ensuring that ML/AI projects are set up for success by automating the residual manual steps in the development and production lifecycle. You’ll also provide essential insights into the ongoing predictive capability and cost of deployed ML/AI assets, using language and visualisations appropriate for your audience.
It’s an opportunity to work across multiple projects concurrently. You’ll use your judgement to determine which projects and teams need most of your time. You’ll contribute to early engagements through strong communication skills, domain experience, and knowledge gathered throughout your career.
This role requires you to have adept time-management and prioritisation skills to keep on top of your responsibilities. You’ll use your cross-project exposure to feed back to the Data & Data Science Leadership Team to guide understanding, improve consistency, and develop and implement initiatives to improve the community for the future.
Who we’re looking for:
You’ll need strong experience delivering and monitoring scalable ML/AI solutions via automated MLOps. Ideally, you’ll also be technically skilled in most or all of the below:
1. Expert knowledge of Python and SQL, including the following libraries: NumPy, Pandas, PySpark, and Spark SQL
2. Expert knowledge of MLOps frameworks in the following categories:
   * Experiment tracking and model metadata management (MLflow)
   * Orchestration of ML workflows (Metaflow)
   * Data and pipeline versioning (Data Version Control)
   * Model deployment, serving, and monitoring (Kubeflow)
3. Expert knowledge of automated artefact deployment using YAML-based CI/CD pipelines and Terraform
4. Working knowledge of one or more ML engineering frameworks: TensorFlow, PyTorch, Keras, Scikit-Learn
5. Working knowledge of object-oriented programming and unit testing in Python
6. Working knowledge of application and information security principles and practices: OWASP for Machine Learning
7. Working knowledge of Unix-based CLI commands, source control, and scripting
8. Working knowledge of containerisation (Docker) and container orchestration (Kubernetes)
9. Working knowledge of a cloud data platform (Databricks) and a data lakehouse architecture (Delta Lake)
10. Working knowledge of the AWS cloud technology stack: S3, Glue, DynamoDB, IAM, Lambda, ELB, EKS
Rewards and benefits:
As you help us to shape the future, we’ve shaped our rewards and benefits to help you thrive and support your lifestyle:
* Competitive salary
* Discretionary group performance-based bonus
* 25 days annual leave (plus Bank Holidays)
* Single-cover private medical insurance
* Pension scheme
We’re committed to making a tangible impact on the climate challenge we all face. Drax is where your individual purpose can work alongside your career drive. We work as part of a team that shares a passion for doing what’s right for the future. With Drax, you can shape your career and a future for generations to come. Together, we make it happen.
At Drax, we’re committed to fostering an environment where everyone feels valued and respected, regardless of their role. To make this a reality, we actively work to better represent the communities we operate in, foster inclusion, and establish fair processes. Through these actions, we build the trust needed for all colleagues at Drax to contribute their perspectives and talents, no matter their background. Find out more about our approach here.
How to apply:
Think this role’s for you? Click the ‘Apply now’ button to begin your Drax journey. If you want to find out more about Drax, check out our LinkedIn page to see our latest news.