Inizio is the world’s leading healthcare and communications group, providing marketing and medical communications services to healthcare clients. We have five main divisions within the group: Medical, Advisory, Engage, Evoke and Biotech. Our Medical division focuses on communicating evidence on new scientific and drug developments and educating healthcare professionals and payers on the appropriate use of therapy.
We have a fantastic opportunity for a Data Engineer to support the build of AI capabilities across Inizio Medical.
Key Responsibilities
1. Build scalable and efficient data pipelines.
2. Design the data architecture (including data models, schemas, and data pipelines) to process complex data from a variety of sources.
3. Build and maintain the CI/CD infrastructure to host and run data pipelines.
4. Build and maintain data APIs.
5. Set up, support, and maintain AI components, including generative AI and machine learning models.
6. Build mechanisms for monitoring data quality and accuracy to ensure the reliability and integrity of data.
7. Evaluate and make technical decisions on the most suitable data technologies based on business needs (including security, cost, etc.).
8. Collaborate with data scientists, data analysts, software developers, and other stakeholders to understand data requirements.
9. Work closely with System Admins and Infrastructure teams to effectively integrate data engineering platforms into wider group platforms.
10. Stay abreast of new and emerging technologies related to data engineering, and be an active champion of data engineering.
11. Monitor and optimise performance of data systems, troubleshoot issues, and implement solutions to improve efficiency and reliability.
Key Requirements
1. Strong proficiency in Python.
2. Experience working with generative AI models, including their deployment and orchestration.
3. A solid understanding of database technologies and modelling techniques, including relational and NoSQL databases.
4. Experience setting up and managing Databricks environments.
5. Competence working with Spark.
6. A solid understanding of data warehouse modelling techniques.
7. Competence in setting up CI/CD and DevOps pipelines.
8. Experience with the cloud platforms Azure and AWS and their associated data technologies is essential.
9. Experience with and understanding of graph technologies and modelling techniques is desirable.
10. Experience with GCP and Scala is desirable.
11. Excellent communication skills, capable of explaining complex data and technical concepts to stakeholders with varying levels of technical awareness.
12. Ability to work collaboratively.
In addition to a great compensation and benefits package including private medical insurance and a company pension, we are happy to talk dynamic working. We are also known for our friendly and informal working environment and offer excellent opportunities for career and personal development.
Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.