Summary: Develop large-scale data calculations using Python and pandas, design and build new data models and ETL processes, and maintain existing data solutions to meet changing business needs.
Key Responsibilities:
* Develop and implement large-scale data calculations using Python and pandas.
* Design and build new data models.
* Design and build new extract-transform-load (ETL) processes.
* Maintain and update existing data solutions to meet changing business needs.
* Provide technical support and training to end-users.
* Document data solutions, including process design, development, and maintenance procedures.
* Stay current on trends and technologies in the data engineering space.
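By way of illustration only (not part of the role description), a minimal sketch of the kind of pandas transformation step an ETL process in this role might include. All table and column names here are hypothetical:

```python
import pandas as pd

def transform_sales(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean hypothetical raw sales rows and aggregate to one row per region."""
    # Drop rows missing the keys the aggregation depends on.
    cleaned = raw.dropna(subset=["region", "amount"])
    # Sum transaction amounts per region for the load step.
    return (
        cleaned.groupby("region", as_index=False)["amount"]
        .sum()
        .rename(columns={"amount": "total_amount"})
    )

# Example "extract" output feeding the transform.
raw = pd.DataFrame({
    "region": ["North", "South", "North", None],
    "amount": [100.0, 50.0, 25.0, 10.0],
})
summary = transform_sales(raw)
print(summary)
```

In a production pipeline the extract and load steps would read from and write to services such as Azure Data Lake Storage rather than in-memory frames.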
Essential Skills and Experience:
* Proven experience delivering and building data solutions.
* Expertise in the Microsoft SQL Server Technology Stack.
* Knowledge of standard data-warehousing concepts, including data marts and star and snowflake schemas.
* Experience with Azure PaaS/IaaS technologies, including Synapse, Data Factory, SQL Database, Databricks, and Data Lake Storage.
* Proficiency in programming and analytics with Python, T-SQL, and ANSI SQL.
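As a small illustration of the star-schema concept listed above (hypothetical tables, shown in pandas for brevity rather than SQL), a fact table keyed by a surrogate dimension key is joined back to its dimension for reporting:

```python
import pandas as pd

# Hypothetical dimension table: one row per product, keyed by a surrogate key.
dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "product_name": ["Widget", "Gadget"],
})

# Hypothetical fact table: one row per sale, referencing the dimension key.
fact_sales = pd.DataFrame({
    "product_key": [1, 1, 2],
    "quantity": [3, 2, 5],
})

# Join facts to the dimension (the "points of the star"), then aggregate.
report = fact_sales.merge(dim_product, on="product_key", how="left")
totals = report.groupby("product_name", as_index=False)["quantity"].sum()
print(totals)
```

The same join would typically be expressed in T-SQL against SQL Server or Synapse; the structure is identical.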
Desirable Skills:
* Data mining skills with R, NumPy, or Scala.
* Experience in Agile software development methodologies, including continuous integration (CI), continuous delivery (CD), test-driven development (TDD), and automation.
* Familiarity with DevOps and continuous delivery practices.
* Understanding of wider Microsoft Azure capabilities.
How to Apply: If you are a motivated Data Engineer with a passion for developing robust data solutions, apply now to join our team and help us drive our data initiatives forward.