Lead Data Engineer

Retelligence is partnering with a high-growth, forward-thinking organization specializing in digital innovation and marketing across international markets. This company is on an exciting journey, rapidly scaling its capabilities and leveraging advanced technology to deliver cutting-edge solutions. This is a fantastic opportunity to join a dynamic team within a business that values innovation, supports professional development, and offers exceptional career progression.

The Role

We are seeking a Lead Data Engineer to take a hands-on role in designing and delivering robust, real-time data pipelines and infrastructure. The company operates in a Google Cloud Platform (GCP) environment and is particularly interested in candidates with strong expertise in Python.

As the Lead Data Engineer, you'll play a critical role in shaping the company's data architecture and driving transformation. You'll partner closely with engineering, product, and analytics teams to ensure efficient, high-performance data systems that enable the business to thrive in a fast-paced environment.

Key Responsibilities

- Design, develop, and maintain scalable, real-time data pipelines and infrastructure in a GCP environment.
- Integrate multiple data sources to ensure seamless real-time data flow across the organization.
- Build and optimize data models for querying and analytics use cases.
- Develop fault-tolerant, highly available data ingestion and processing pipelines.
- Continuously monitor and improve pipeline performance for low-latency, high-throughput operations.
- Ensure data quality, integrity, and security across all systems.
- Implement effective monitoring, logging, and alerting mechanisms.
- Collaborate with product, engineering, and analytics teams to deliver tailored solutions that meet business needs.

About You

- Strong hands-on data engineering experience, with expertise in Python.
- Proven track record of building and managing real-time data pipelines.
- In-depth experience with Google Cloud Platform (GCP) and its associated tools for data ingestion and processing.
- Familiarity with distributed streaming platforms such as Kafka or similar technologies.
- Advanced knowledge of SQL.
- Experience with data orchestration tools.
- Ability to optimize and refactor data pipelines for improved performance and scalability.
- Strong problem-solving skills and the ability to thrive in a collaborative, fast-paced environment.