A day in the life

As a Senior Data Engineer at ManyPets, you'll be at the forefront of our data transformation journey, playing a crucial role in building a modern data warehouse that powers the business and AI. You will deep-dive into complex ELT processes and design innovative solutions to integrate first-party and third-party data sources resiliently. As you work, you'll build data models that are effective and consumable, translating business requirements into well-curated datasets that answer critical questions.

Throughout the day, you'll wear multiple hats - from maintaining and improving our data platform to mentoring junior team members and championing best practices in data engineering. You might spend a few hours collaborating with upstream teams to ensure incoming data meets quality standards, then switch gears to work with analysts and data scientists, transforming raw data into actionable insights. The fast-paced environment keeps you on your toes as you adapt to ever-changing business needs and lead creative initiatives to enhance our data infrastructure. As the day winds down, you might find yourself in a brainstorming session, exploring new ways to make ManyPets a leader in the data space.

You're not just writing code; you're architecting the backbone of our data platform, working with both dimensional and transactional models to create a mature, scalable system. Your role is central to our success, as you balance technical expertise with business acumen, constantly pushing the boundaries of what's possible with our data. It's a challenging but rewarding position, where your leadership and innovation directly impact the company's data-driven decision-making capabilities.

Your responsibilities

- Architect and build scalable, efficient data pipelines and models aligned with business needs, optimising for performance and cost.
- Actively participate in mobbing and knowledge-sharing sessions, fostering a collaborative environment.
- Provide input on improving the scalability, reliability, and performance of the data platform.
- Identify and resolve issues to minimise downtime and ensure reliable data delivery.
- Support and enforce data governance policies, including data access, retention, and quality.
- Establish frameworks for metadata management, data lineage, and monitoring of data quality issues.
- Implement advanced strategies such as data partitioning, indexing, and clustering to improve speed and reduce costs.
- Provide mentorship to junior data engineers, offering guidance on problem-solving, code reviews, and career development.
- Take ownership of your responsibilities while diplomatically managing workload, and help build a culture of knowledge-sharing and continuous improvement within the team.

Your skills and experience

- Strong proficiency in SQL for writing complex queries, optimising for performance, and managing large-scale datasets.
- Expertise in designing and implementing dimensional data models (e.g., star/snowflake schema) for analytics.
- Knowledge of event-driven design patterns and decoupling systems.
- Extensive experience in building and managing data workflows using Apache Airflow for orchestration and scheduling.
- Proficiency in implementing data governance practices, including data ownership, lineage, and quality monitoring.
- Solid understanding of CI/CD principles, including automated testing and deployments for data pipelines.
- Proficiency in Python, using it to support the data warehouse.
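To give a flavour of the dimensional modelling mentioned above, here is a minimal star-schema sketch: one fact table of additive measures joined to dimension tables on surrogate keys. All table and column names are illustrative inventions for a pet-insurance domain, not ManyPets' actual models.

```python
import sqlite3

# Minimal star schema: a central fact table referencing dimension tables.
# Table and column names are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20240115
    full_date TEXT,
    year INTEGER,
    month INTEGER
);
CREATE TABLE dim_policy (
    policy_key INTEGER PRIMARY KEY,
    policy_number TEXT,
    pet_species TEXT
);
CREATE TABLE fact_claims (
    claim_id INTEGER PRIMARY KEY,
    date_key INTEGER REFERENCES dim_date(date_key),
    policy_key INTEGER REFERENCES dim_policy(policy_key),
    claim_amount REAL               -- additive measure
);
""")

cur.execute("INSERT INTO dim_date VALUES (20240115, '2024-01-15', 2024, 1)")
cur.execute("INSERT INTO dim_policy VALUES (1, 'POL-001', 'dog')")
cur.execute("INSERT INTO fact_claims VALUES (1, 20240115, 1, 350.0)")
cur.execute("INSERT INTO fact_claims VALUES (2, 20240115, 1, 120.5)")

# Typical analytical query: aggregate a measure, slicing by dimensions.
cur.execute("""
    SELECT d.year, p.pet_species, SUM(f.claim_amount)
    FROM fact_claims f
    JOIN dim_date d ON f.date_key = d.date_key
    JOIN dim_policy p ON f.policy_key = p.policy_key
    GROUP BY d.year, p.pet_species
""")
print(cur.fetchall())  # [(2024, 'dog', 470.5)]
```

A snowflake schema differs only in that the dimension tables themselves are further normalised into sub-dimensions.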
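Similarly, as a sketch of the data-quality monitoring work described above, the kind of rule-based record checks a governance framework might run could look like this. The function, field names, and rules are hypothetical examples, not an actual ManyPets framework.

```python
from datetime import date

# Illustrative data-quality checks applied per record; the rules shown
# (completeness, validity, timeliness) and all names are hypothetical.
def check_row(row):
    """Return a list of rule violations for one record."""
    issues = []
    if not row.get("policy_number"):
        issues.append("missing policy_number")          # completeness
    if row.get("claim_amount", 0) < 0:
        issues.append("negative claim_amount")          # validity
    if row.get("claim_date", "") > date.today().isoformat():
        issues.append("claim_date in the future")       # timeliness
    return issues

rows = [
    {"policy_number": "POL-001", "claim_amount": 350.0, "claim_date": "2024-01-15"},
    {"policy_number": "", "claim_amount": -5.0, "claim_date": "2024-01-16"},
]
report = {i: check_row(r) for i, r in enumerate(rows) if check_row(r)}
print(report)  # {1: ['missing policy_number', 'negative claim_amount']}
```

In practice such checks would feed a monitoring dashboard or alerting pipeline rather than a print statement.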