Join our client in embarking on an ambitious data transformation journey using Databricks, guided by best-practice data governance and architectural principles. To support this, we are recruiting talented data engineers. As a major UK energy provider, our client is committed to 100% renewable energy and sustainability, focusing on delivering exceptional customer experiences.

This is initially a 3-month contract with the potential for extension. The role is hybrid, with one day a week based in their Nottingham office; this is negotiable. It is a full-time role, 37 hours per week.

Accountabilities:
- Develop and maintain scalable, efficient data pipelines within Databricks, continuously evolving them as requirements and technologies change.
- Build and manage an enterprise data model within Databricks.
- Integrate new data sources into the platform using batch and streaming processes, adhering to SLAs.
- Create and maintain documentation for data pipelines and associated systems, following security and monitoring protocols.
- Ensure data quality and reliability processes are effective, maintaining trust in the data.
- Take ownership of complex data engineering projects and develop appropriate solutions in accordance with business requirements.
- Work closely with stakeholders and manage their requirements.
- Actively coach and mentor others in the team, fostering a culture of innovation and peer review to ensure best practice.

Knowledge and Skills:
- Extensive experience with Python is preferred, including advanced concepts such as decorators, protocols, functools, context managers, and comprehensions.
- Strong understanding of SQL, database design, and data architecture.
- Experience with Databricks and/or Spark.
- Knowledgeable in data governance, data cataloguing, data quality principles, and related tools.
- Skilled in data extraction, joining, and aggregation tasks, especially with big data and real-time data using Spark.
- Capable of performing data cleansing operations to prepare data for analysis, including transforming data into useful formats.
- Understanding of data storage concepts and logical data structures, such as data warehousing.
- Able to write repeatable, production-quality code for data pipelines, utilizing templating and parameterization where needed.
- Able to make data pipeline design recommendations based on business requirements.
- Experience with data migration is a plus.
- Open to new ways of working and new technologies.
- Self-motivated, with the ability to set goals and take initiative.
- Driven to troubleshoot, deconstruct problems, and build effective solutions.
- Experience with Git and version control.
- Experience working with large, legacy codebases.
- Understanding of unit and integration testing.
- Understanding of and experience with CI/CD and general software development best practices.
- Strong attention to detail and curiosity about the data you will be working with.
- Strong understanding of Linux-based tooling and concepts.
- Knowledge and experience of Amazon Web Services is essential.

Please note: should your application be successful and you are offered the role, a number of pre-employment checks will need to be carried out before your appointment can be confirmed. Any assignment offer with our client will be subject to a satisfactory checking report from the Disclosure and Barring Service.

This vacancy is being advertised by Rullion Ltd acting as an employment business.
Since 1978, Rullion has been securing exceptional candidates for a range of clients, from large, well-known brands to SMEs and start-ups. As a family-owned business, Rullion's approach is credible and honest, focused on building long-lasting relationships with both clients and candidates. Rullion celebrates and supports diversity and is committed to ensuring equal opportunities for both employees and applicants.