Job Title: SC Cleared Azure Data Engineer - Government client - Fully Remote
Location: Fully Remote - UK Based
Salary/Rate: Up to £455 per day (Inside IR35)
Start Date: April / May
Job Type: 3 Month Contract (with scope to extend)
Company Introduction
We are looking for an SC Cleared Data Engineer to join our client in the Government Administration sector.
**Candidates applying for this role must hold active Security Clearance**
As a senior data engineer, you will engage with data leads, data scientists, analysts and users across the data space on data analytics and the development and implementation of data insights. You will work with business analysts, data scientists, and project and delivery leads to analyse backlogs, define and refine metric tickets and implementation logic, map data, and create and estimate related tasks. You will be a strong champion of data standards for ETL, data modelling and best practices, and will drive their implementation.
Required Skills/Experience
* Strong in Azure data services such as Azure Data Factory (ADF), Synapse, SQL and Azure Databricks (ADB).
* Strong in Databricks notebook development for data ingestion, validation, transformation and metric builds.
* Strong in PySpark and SQL.
* Strong in ADF pipeline development, data orchestration techniques, monitoring and troubleshooting.
* Strong in stored procedure development.
* Good knowledge in data modelling (dimensional) and Power BI reporting.
Job Responsibilities/Objectives
* Analyse raw data (mostly in JSON format) for parsing, schema evolution and transformation in support of metric development.
* Analyse reporting/metric requirements from a data engineering perspective for refinement, estimation, development and deployment.
* Work closely with analysts and data scientists to understand the business requirements, data sources and logic for metric development.
* Create normalised/dimensional data models based on requirements.
* Translate and refine the notebooks and logic developed as part of the prototype.
* Transform data across the landing, staging and transformed layers into the Synapse dimensional model.
* Create notebooks in Databricks for incremental data load and transformation.
* Create stored procedures for data load and transformation in Azure Synapse dedicated SQL pools.
* Create ADF pipelines for data orchestration across the different data layers of Databricks and Synapse.
If you are interested in this opportunity, please apply now with your updated CV in Microsoft Word/PDF format.
Disclaimer
Notwithstanding any guidelines given to level of experience sought, we will consider candidates from outside this range if they can demonstrate the necessary competencies.
Square One is acting as both an employment agency and an employment business, and is an equal opportunities recruitment business. Square One embraces diversity and will treat everyone equally. Please see our website for our full diversity statement.