Responsibilities:
• Design, construct, and maintain scalable data pipelines.
• Develop and manage ETL processes.
• Ensure data security and compliance with regulations.
• Collaborate with data scientists to ensure data quality.

Experience Level:
• 1 year of work experience with Databricks (mandatory).
• 5 years implementing Spark and Big Data solutions.

Key Skillsets:
• Database management: proficiency in SQL and NoSQL databases.
• ETL tooling: expertise with Apache Kafka, Spark, and Airflow.
• Strong programming skills in Python, Java, or Scala.
• Familiarity with Big Data technologies such as Hadoop and Spark.
• Expertise in AWS data services.