Job Description

Verisk Maplecroft are looking for a Data Engineer who thrives on building efficient, bespoke and high-impact data pipelines. As a Data Engineer at Maplecroft, you will build and maintain the automated pipelines responsible for the continuous delivery of the risk index data that forms Maplecroft's Global Risk Data.

About the Day-to-Day Responsibilities of the Role

- Design, build, and optimize data pipelines with Python
- Streamline and scale data workflows using Metaflow, Prefect, AWS and proprietary data services
- Collaborate with data scientists, analysts, and developers to ensure seamless data flow and the implementation of bespoke company methodologies
- Improve data quality, reliability, and accessibility across teams
- Deliver high-quality, maintainable, well-tested code that meets business requirements
- Enable a consistent approach to our data production pipeline

Qualifications

About You and How You Can Excel in This Role

Required

- A proven track record of developing in Python
- An ability to meet pipeline requirements through an applied understanding of good data acquisition, transformation and manipulation techniques
- Established experience of building ETL frameworks and tooling
- Knowledge of common Python data analysis libraries (numpy, pandas)
- Familiarity with Agile software development practices
- A good understanding of git and of working collaboratively on a team-level code base

Useful to Have

- Familiarity with working on geospatial data within Python (GDAL, rasterio, shapely)
- Knowledge of cloud technologies and platforms such as AWS
- Experience using Docker or other container orchestration technologies
- Experience with the Linux command line and basic Linux server administration skills