A fantastic opportunity has arisen for a Data Engineer to join the Pavers Head Office Team at Northminster Business Park, York. This is a new position in our growing Data Team, offered as a full-time, permanent opportunity in a growing business where you are encouraged to develop your skills in a supportive environment.

Our Data Engineering & Analytics Team plays a key role in the business's operations, and as a Data Engineer you will help make data available in our Enterprise Data Hub so that our Analysts can draw out insights and present them to the business. The business is continually growing and evolving, moving at a fast pace, and good-quality, insightful data is key to ensuring this continues.

You'll report to the Data Engineering Manager in the growing Data Engineering & Analytics Team. Your main responsibility will be to assist in the provision of a new Enterprise Data Hub across the Pavers Group, so that it can be used in key reporting and analytics dashboards, whilst ensuring data is secure, accessible, and conforms to the relevant compliance legislation. The role will suit an individual who is highly technical, loves data, and is keen to learn and grow in a fast-paced environment. This is a great opportunity to help shape the future landscape of Data Engineering and Analytics using cutting-edge technologies.

Let's see what's in it for you:
Salary up to £47,000, depending on experience
Discretionary Annual Bonus scheme
Generous colleague discount scheme, some of which can be shared with your family and friends

And that's not all, working for Pavers comes with so much more to enjoy:
Free onsite parking at York Head Office
Death in Service Benefit
Holiday Entitlement (increases with service)
Company Contribution Pension
Access to RetailTRUST (Wellbeing & Financial Support)
Access to the Pavers Foundation: an employee-led grant application and charitable giving scheme
Access to wider training and development opportunities through Pavers Academy

Want to see a snapshot of your duties?
Deliver first-class software solutions which are secure, appropriately tested, perform well, and help provide an engaging customer experience
Actively improve your engineering skills, increasing your mastery of our stack
Play a proactive part in owning your team's services, taking responsibility for support, monitoring, measuring performance, and addressing technical issues when required
Contribute enthusiastically to our continuous improvement of coding practices, application quality, tooling, and agile processes
Communicate constructively with peers, seniors, and stakeholders in all settings
Continuously evaluate the team's processes and procedures to maintain a positive and efficient engineering culture
Directly contribute to the design and code of data pipelines operating on secure, production data
Contribute to the end-to-end design and implementation of common components that improve our ability to write efficient and reliable data pipelines
Perform well-defined engineering tasks in a reasonable amount of time; don't get caught up in the unknown; work to figure it out
Actively embrace challenges, particularly when they offer the potential to create significant impact or value
Endeavour to become an expert in at least one area or domain of the code/product
Create and maintain quality technical documentation for any software you have developed, ensuring maintainability, readability, and testability
Conduct code reviews, pair programming, and knowledge-sharing sessions, embracing feedback at every step
Involve yourself in code deployments, including raising and completing change requests to deploy to production
Help grow the engineering team's community presence by contributing to conferences, meet-ups, blog posts, and open-source projects
Work effectively with other teams across the business, ensuring plans are timely and accurate

Here's what we are looking for in our Data Engineer:
Experience of working with and designing a platform, implementing best practices around data processing at industrial scale
Ability to deliver on projects of varied complexity across multiple SBG domains
Strong software development skills in an object-oriented environment; proficiency with Python, coding, and testing patterns is fundamental, along with the ability to detail work in a clear and concise manner
Solid knowledge of data modelling principles, including data lake and warehousing tools and techniques
Hands-on experience with ingesting and processing streamed data (e.g. RabbitMQ, Kafka, Pub/Sub) and data flow orchestration (Dataflow, Apache NiFi, Airflow, Luigi, etc.)
Strong understanding of the SDLC of data products, relevant CI/CD tools (Jenkins, Spinnaker, TeamCity), and containerisation (Docker, Kubernetes, Helm)
Cloud Engineering experience would be beneficial; if you don't have GCP experience, Azure, AWS, or Cloudera DP would all suffice
An effective communicator, both with individuals and in group settings