About the role
We're on a mission to make affordable loans available to more people. Using the power of Open Banking, we have built state-of-the-art technology that allows us to look beyond traditional credit scores and offer fairer credit to people ignored by traditional lenders.
We have two parts of our business. On the consumer side, we have Abound. Abound has proven that our approach works at scale, with over £300 million lent to date. While other lenders only look at your credit score, we use Open Banking to look at the full picture - what you earn, how you spend, and what's left at the end.
On the B2B side, we have Render. Render is our award-winning software-as-a-service platform that allows Abound to make better, less risky lending decisions. And less risky decisions mean we can offer customers better rates than they can usually find elsewhere. We're taking Render global so that more companies, from high-street banks to other fintechs, can offer affordable credit to their customers.
We are looking for a highly motivated Data Engineer to play a key part in deploying and maintaining our data infrastructure as we challenge traditional consumer lending models. The ideal candidate will have experience building and maintaining ETL pipelines and data warehouses.
Our tech stack: AWS services (ECS, EC2, RDS, IAM, CloudWatch, CloudTrail, KMS, Secrets Manager), Azure DevOps, Python, CloudFormation, AWS CDK, Postgres, MySQL, New Relic
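For a flavour of how these pieces fit together, here is a minimal, hypothetical sketch of an ECS-based ETL service defined with the AWS CDK in Python. The stack name, resource names, and sizing are illustrative assumptions, not our actual infrastructure.

```python
# Hypothetical sketch only: names, sizing, and image are illustrative.
from aws_cdk import App, Stack, aws_ec2 as ec2, aws_ecs as ecs
from constructs import Construct

class EtlServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Network and cluster that would host scheduled ETL tasks
        # (service wiring omitted for brevity).
        vpc = ec2.Vpc(self, "EtlVpc", max_azs=2)
        ecs.Cluster(self, "EtlCluster", vpc=vpc)

        # Task definition running a containerised Python ETL job on Fargate.
        task = ecs.FargateTaskDefinition(
            self, "EtlTask", cpu=256, memory_limit_mib=512
        )
        task.add_container(
            "etl",
            image=ecs.ContainerImage.from_registry("python:3.12-slim"),
            # Ship container logs to CloudWatch, matching the stack above.
            logging=ecs.LogDrivers.aws_logs(stream_prefix="etl"),
        )

app = App()
EtlServiceStack(app, "EtlServiceStack")
app.synth()
```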
Who you are:
* 3+ years of professional experience in data engineering
* Strong expertise in building and maintaining ETL pipelines
* Proficiency in SQL and experience with various databases (e.g., MySQL, PostgreSQL)
* Proficiency in a scripting language (preferably Python)
* Experience with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake)
* Experience with CI/CD tools and processes
* Proficiency in Git for version control
* Strong problem-solving and analytical skills
* Effective communication and collaboration skills, especially with data scientists and analysts
* Experience with cloud platforms (e.g., AWS, GCP, Azure)
* Knowledge of data modelling, data governance, and data quality best practices
What you'll be doing:
* ETL Pipeline Development: Design, build, and maintain efficient and reliable ETL pipelines to extract, transform, and load data into our data warehouse (a minimal sketch follows this list).
* Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and ensure the availability and quality of data.
* Data Integration: Integrate data from various internal and external sources, ensuring data consistency and accuracy.
* Performance Optimisation: Optimise ETL processes for performance, scalability, and efficiency.
* Data Quality: Implement data quality checks and monitoring to ensure the integrity and accuracy of data in the data warehouse.
* Infrastructure Management: Manage and optimise the data infrastructure, ensuring high availability and reliability.
* Version Control: Use Git for version control to maintain and manage codebases for ETL processes.
* CI/CD Pipelines: Develop and maintain CI/CD pipelines for automated deployment of ETL processes.
* Monitoring and Troubleshooting: Monitor ETL processes, troubleshoot issues, and implement solutions to prevent data pipeline failures.
* Documentation: Document ETL processes, data workflows, and data structures to ensure knowledge sharing and maintainability.
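To make the pipeline and data-quality responsibilities above concrete, here is a minimal sketch of an ETL step with a built-in quality check, written against pandas and SQLAlchemy. The connection URLs, table names, and columns are hypothetical placeholders, not our real schema.

```python
# Minimal ETL sketch: extract from an operational Postgres database,
# transform in pandas, quality-check, then load into the warehouse.
# All URLs, tables, and columns below are hypothetical examples.
import pandas as pd
from sqlalchemy import create_engine

SOURCE_URL = "postgresql://user:pass@source-db/app"        # assumed source
WAREHOUSE_URL = "postgresql://user:pass@warehouse-db/dwh"  # assumed warehouse

def run_pipeline() -> None:
    source = create_engine(SOURCE_URL)
    warehouse = create_engine(WAREHOUSE_URL)

    # Extract: pull the latest loan applications from the source database.
    df = pd.read_sql("SELECT * FROM loan_applications", source)

    # Transform: normalise amounts and drop incomplete records.
    df["amount_gbp"] = df["amount_pence"] / 100
    df = df.dropna(subset=["applicant_id", "amount_gbp"])

    # Data quality check: fail loudly rather than load bad data.
    if not (df["amount_gbp"] > 0).all():
        raise ValueError("Non-positive loan amounts found; aborting load")

    # Load: append the cleaned batch into the warehouse table.
    df.to_sql("fact_loan_applications", warehouse,
              if_exists="append", index=False)

if __name__ == "__main__":
    run_pipeline()
```

In a production pipeline, a failed quality check would typically also alert through monitoring (for example New Relic or CloudWatch) rather than simply raising.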
What we offer
* Everyone owns a piece of the company - equity
* 25 days' holiday a year, plus 8 bank holidays
* 2 paid volunteering days per year
* One month paid sabbatical after 4 years
* Employee loan
* Free gym membership
* Save up to 60% on an electric vehicle through our salary sacrifice scheme with Loveelectric
* Team wellness budget to be active together - set up a yoga class, book a tennis lesson, or go bouldering