JLL empowers you to shape a brighter way.
Our people at JLL and JLL Technologies are shaping the future of real estate for a better world by combining world-class services, advisory and technology for our clients. We are committed to hiring the best, most talented people and empowering them to thrive, grow meaningful careers, and find a place where they belong. Whether you’ve got deep experience in commercial real estate, skilled trades or technology, or you’re looking to apply your relevant experience to a new industry, join our team as we help shape a brighter way forward.
As a Senior Data Engineer, you will lead the development of scalable data pipelines and integration of diverse data sources into the JLL Asset Beacon platform. Your work will focus on consolidating financial, operational, and leasing data into a unified platform that delivers accurate insights for commercial real estate asset management. Collaborating closely with internal developers and stakeholders, you will gather requirements, solve integration challenges, and ensure seamless data flows to support informed decision-making.
In addition to technical responsibilities, you will mentor junior engineers, promote best practices in data engineering, and maintain high-quality standards through code reviews. Leveraging tools like Spark, Airflow, Kubernetes, and Azure, you will enhance the platform's performance, reliability, and scalability. Your expertise in data ecosystems will play a critical role in driving innovation and enabling advanced data-driven solutions for the evolving needs of the real estate industry.
Responsibilities:
1. Design and implement scalable, efficient, and robust data pipelines.
2. Support and maintain the data platform to ensure reliability, security, and scalability.
3. Work closely with internal developers and stakeholders to gather requirements, deliver insights, and align project goals.
4. Mentor junior engineers, fostering their growth through knowledge sharing and guidance.
5. Conduct code reviews to maintain quality and consistency.
Our Technologies:
* Data Processing: Spark
* Workflow Orchestration: Airflow
* Containerization: Kubernetes
* Cloud: Azure
* Data APIs and Semantic Layer: CubeJS
Required Experience:
* Educational Background: A STEM degree, preferably in Computer Science or a related computing discipline.
* Professional Experience: At least 5 years of experience in data engineering, data warehousing, or a related field.
* Strong Python and PySpark experience.
* SQL skills are essential.
* Experience with data orchestration platforms or tools such as Airflow, Azure Data Factory (ADF), or SQL Server Integration Services (SSIS).
* Solid understanding of data modeling principles and data warehousing concepts.
* Domain Knowledge: Financial or real estate experience is advantageous but not required.
Location: On-site – Bristol, GBR