Responsibilities:
Design and implement data pipelines for AI model training.
Optimize data workflows to support machine learning projects.
Collaborate with data scientists and engineers to ensure seamless integration of data pipelines.
Utilize cloud platforms to manage large-scale datasets.
Communicate data-processing insights to teams clearly and effectively.
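As a rough illustration of the pipeline work described above, here is a minimal extract-transform-load sketch in Python. It is only a sketch: the function names, the CSV column layout (id, text, label), and the JSON Lines output format are illustrative assumptions, not part of this role's actual stack.

```python
import csv
import json

def extract(path):
    # Read raw records from a CSV file (hypothetical layout: id,text,label).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(records):
    # Normalize text and drop rows missing a label -- typical cleanup
    # before handing data to a model-training job.
    return [
        {"id": r["id"], "text": r["text"].strip().lower(), "label": r["label"]}
        for r in records
        if r.get("label")
    ]

def load(records, path):
    # Write JSON Lines, a format commonly used for training datasets.
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
```

In practice each stage would be a task in an orchestrator, with the transform step scaled out on Spark or a managed cloud service for large datasets.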
Requirements:
Proficiency in Python, ETL frameworks, and data pipeline tools.
Experience with cloud platforms like AWS, Azure, or Google Cloud.
Knowledge of big data tools (e.g., Spark, Hadoop) is a plus.
Strong problem-solving and communication skills.
Bachelor’s or Master’s degree in Data Engineering, AI, or a related field.