Job Description:
As a Data Engineer with Iceberg experience, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data initiatives.
Key Responsibilities:
* Data Integration: Develop and maintain data pipelines to extract, transform, and load (ETL) data from various sources into AWS data stores for both batch and streaming data ingestion.
* AWS Expertise: Utilize your expertise in AWS services such as Amazon EMR, S3, AWS Glue, Amazon Redshift, AWS Lambda, and more to build and optimize data solutions.
* Data Modeling: Design and implement data models to support analytical and reporting needs, ensuring data accuracy and performance.
* Data Quality: Implement data quality and data governance best practices to maintain data integrity.
* Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and storage solutions to ensure optimal performance.
* Documentation: Create and maintain comprehensive documentation for data pipelines, architecture, and best practices.
* Collaboration: Collaborate with cross-functional teams, including data scientists and analysts, to understand data requirements and deliver high-quality data solutions.
* Automation: Implement automation processes and best practices to streamline data workflows and reduce manual interventions.
* Iceberg Experience: Work with big-data ACID table formats to build a delta lake, particularly the Apache Iceberg table format and its data-loading methods.
* Iceberg Functionality: Apply strong knowledge of Iceberg features, including its incremental (delta) reads to identify changed records, and perform optimization and housekeeping on Iceberg tables in the data lake.
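The Iceberg optimization and housekeeping tasks above are commonly run as Spark SQL stored procedures. A minimal sketch, assuming a Spark session with an Iceberg catalog configured; the catalog name (`my_catalog`), database, and table names are hypothetical:

```sql
-- Build a changelog view to identify changed records between snapshots
-- (creates a view named db.orders_changes by default)
CALL my_catalog.system.create_changelog_view(table => 'db.orders');

-- Optimization: compact small data files into larger ones
CALL my_catalog.system.rewrite_data_files(table => 'db.orders');

-- Housekeeping: expire snapshots older than a given timestamp
CALL my_catalog.system.expire_snapshots(
  table => 'db.orders',
  older_than => TIMESTAMP '2024-01-01 00:00:00'
);

-- Housekeeping: remove files no longer referenced by table metadata
CALL my_catalog.system.remove_orphan_files(table => 'db.orders');
```

In practice these procedures are scheduled (e.g. via AWS Glue jobs or EMR steps) rather than run ad hoc, so small-file buildup and snapshot growth are kept in check continuously.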
Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python
Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin