To apply, please submit your CV and a cover letter (we would like to know about your experience in developing tools and processes which improve data quality and reduce manual effort).
About Smart Data Foundry
Smart Data Foundry has a purpose to unlock the power of financial data as a force to improve people’s lives.
We enable the research ecosystem to flourish through the provision of research ready real financial data.
We create data-driven insights based on real financial data that identify areas to inform policy change and enhance regulation.
We inspire financial innovation with aizle™, Smart Data Foundry’s synthetic data engine, which creates high-utility synthetic data with no privacy issues when real-world data doesn’t exist or cannot be safely shared.
Our Data Engineer develops tools and processes which ensure the quality of our data and reduce the manual effort involved in our work as we scale.
Contributing to the design and development of data processing architecture and ongoing pipeline optimisation.
Developing and owning data cleaning, quality and validation processes.
Working with data scientists, product owners, data partners and other internal and external customers to understand and deliver team commitments.
Working with customers and partners to integrate data from multiple data sources, e.g. APIs (Application Programming Interfaces), SFTP, manual file transfers, etc.
Monitoring system performance and contributing to infrastructure optimisation, considering utility and cost.
Testing, and mentoring less experienced software engineers and quality engineers.
Working with the synthetic data, product and engineering teams to build a cloud-based, data-driven SaaS platform.
Proven programming experience with Python in a Linux environment.
Good understanding of software development practices such as code reviews, unit testing, data validation, TDD, source management, continuous integration, and releasing software to live environments.
Knowledge of large-scale data processing technology, for example: strong experience of batch ETL, data warehousing, database querying, and building secure, scalable applications in a cloud-based environment, preferably AWS (Amazon Web Services).
Experience with modern software architecture paradigms such as microservices, event-driven, and serverless.
Experience of using Jira Software or similar project management tools.
Ability to deal with ambiguity and react quickly in an evolving, fast-paced environment.
Collaborative and highly curious, with a desire to understand how Smart Data Foundry delivers its services.
There may be a requirement to work on data partners’ sites, directly with data partner teams, to build pipelines that make it easy to share data with us.
Appreciation of the sensitive nature of the financial data held by Smart Data Foundry and the need to abide by information governance best practices.
This role is based in our office at the Edinburgh Futures Institute in Edinburgh; candidates must therefore be within commutable distance or willing to relocate.
Applicants must have the existing and ongoing right to work in the U.K. without restrictions as sponsorship for visas is not provided.
We give you support, nurture your talent and reward success. You will benefit from a competitive reward package, 40 days’ annual leave (including 6 bank holidays), a defined contribution pension scheme (14% employer and 4.5% minimum employee contribution rates), family-friendly initiatives and flexible working.