Concirrus

Concirrus is trusted by leading specialty insurers to enhance efficiency throughout the insurance lifecycle, from acquisition to renewal. We automate processes such as submission ingestion, compliance checks, ESG monitoring, and portfolio management, allowing underwriters to focus on value-added activities. This results in increased operational efficiency and faster business acquisition.

About you

As a Senior Data Engineer, you will be responsible for designing, building, and maintaining scalable and efficient data pipelines and architectures. You will also be an advocate for data governance within the organisation, improving data quality, metadata management, data cataloguing, and lineage.

The Role

The Data Engineering team at Concirrus is tasked with building and managing a cloud-based data platform on GCP and AWS. In this role, you will develop and maintain a scalable data lakehouse built on the Delta ecosystem and powered by the Apache Spark processing engine, and you will support federated query environments. You will collaborate closely with teams across business reporting, data science, and product development. Responsibilities include gathering and refining requirements, designing data architectures and solutions, and creating ETL pipelines using Airflow to integrate data into BigQuery and other systems. The role has a strong focus on analytics engineering; you are a highly motivated individual contributor who thrives in dynamic environments and enjoys tackling complex data challenges.

Qualifications and Skills

Must Have Skills:
- Degree in Computer Science or a STEM-based subject.
- Strong Python programming skills in a data engineering context.
- 3 years of work experience in a relevant role such as data engineering or software development.
- Experience working on cloud data platforms, preferably GCP.
- Ability to work with GCP services such as BigQuery, Cloud Composer, Dataproc, Pub/Sub, and Cloud Run.
- Extensive Airflow development, deployment, and orchestration experience in cloud environments, and the ability to work with Airflow operators (preferably the Google Cloud operators).
- Solid Apache Spark experience of at least 2 years.
- Strong GitLab experience; CI/CD deployment via GitLab Runner is expected.
- Data lake technologies, preferably the Delta ecosystem.
- Experience building a data governance platform on a data lake using Unity Catalog or OpenLineage.

Nice to Have Skills:
- Exposure to development using Polars, DuckDB, and Apache Arrow.
- Rust-based development experience.
- Experience working with a federated query engine such as Trino/Starburst.

D&I Statement

We aspire to build a diverse team and cultivate an inclusive environment where everyone is empowered to perform their best work. As we continue to learn and evolve, we are committed to:
- Providing a safe environment where all individuals feel accepted and valued.
- Continuing to educate ourselves and celebrate our differences.
- Ensuring that our behaviours align with our values and our dedication to D&I.
- Regularly reviewing our performance and perpetually striving for improvement.

What We Offer:
- A collaborative, fast-paced environment at the forefront of the insurtech revolution.
- Opportunities for professional growth and development.
- Competitive compensation, benefits, and a culture that values innovation and teamwork.
- Hybrid working – up to 3 days of home working each week.