Please note that this role requires eligibility for BPSS clearance
A variety of soft skills and experience may be required for the following role. Please ensure you check the overview below carefully.
Tasks and Responsibilities
Engineering:
Ingestion configuration
Writing Python/PySpark and Spark SQL code for validation/curation in notebooks (see the sketch after this list)
Create data integration test cases
Implement or amend worker pipelines
Implement data validation/curation rules
Convert the data model and data mapping into a technical data mapping
Implement integrations
Data migration implementation and execution
Data analysis (profiling, etc.)
Azure DevOps/GitHub configuration for ADF and Databricks code
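To give a flavour of the validation/curation work described above, here is a minimal PySpark sketch of the kind of notebook cell involved. It is illustrative only: the table names (raw.customers, curated.customers, quarantine.customers) and the rules are hypothetical assumptions, not part of this role's actual codebase.

```python
from pyspark.sql import SparkSession, functions as F

# In a Databricks notebook a SparkSession is already available as `spark`;
# getOrCreate() keeps the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

raw = spark.table("raw.customers")  # hypothetical ingestion output table

# Curation: standardise columns, then flag rows against simple validation rules.
curated = (
    raw
    .withColumn("email", F.lower(F.trim(F.col("email"))))
    .withColumn("is_valid_email", F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
    .withColumn("is_valid_id", F.col("customer_id").isNotNull())
)

# Split valid and rejected rows so rejects can be reviewed rather than silently dropped.
passes = F.col("is_valid_email") & F.col("is_valid_id")
valid = curated.filter(passes)
rejected = curated.filter(~passes)

valid.write.mode("overwrite").saveAsTable("curated.customers")        # hypothetical target
rejected.write.mode("overwrite").saveAsTable("quarantine.customers")  # hypothetical quarantine
```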
Our ideal candidate
Strong experience in designing and delivering Azure-based data platform solutions, with technologies including Azure Data Factory (ADF), Databricks, Azure Functions, App Service, Logic Apps, AKS, and Web Apps
Good knowledge of real-time streaming applications, preferably with experience of Kafka real-time messaging, Azure Functions, or Azure Service Bus
Spark processing and performance tuning
File formats and partitioning, e.g. Parquet, JSON, XML, CSV (illustrated in the sketch after this list)
Azure DevOps/GitHub
Hands-on experience in at least one of Python, PySpark, or Spark SQL, with knowledge of the others
Experience of synchronous and asynchronous interface approaches
Knowledge of data modeling (3NF, dimensional modeling, Data Vault 2.0)
Work experience in agile delivery
Able to provide comprehensive documentation
Able to set and manage realistic expectations for timescales, costs, benefits and measures for success
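As an illustration of the Spark processing, performance tuning, and file-format/partitioning points above, the sketch below writes date-partitioned Parquet to a data lake. The source table, storage account, and path are hypothetical placeholders, not details of this engagement.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

tx = spark.table("curated.transactions")  # hypothetical source table

(
    tx
    .withColumn("event_date", F.to_date("event_timestamp"))
    .repartition("event_date")    # co-locate each date's rows before writing
    .write
    .mode("overwrite")
    .partitionBy("event_date")    # one folder per date, enabling partition pruning on reads
    .parquet("abfss://lake@storageaccount.dfs.core.windows.net/transactions/")  # hypothetical path
)
```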
Nice to Have
* Experience with the integration and implementation of data cataloging tools such as Azure Purview or Collibra
* Experience in implementing and integrating visualization tools such as Power BI, Tableau, etc.
* Experience in C# application development (ASP.NET MVC)