Responsibilities:
Conduct cutting-edge research in explainable AI (XAI) to improve model interpretability and transparency.
Develop algorithms and methods that make complex machine learning models understandable to non-technical stakeholders.
Collaborate with data scientists and AI engineers to integrate explainability into AI models.
Publish research papers and contribute to the AI community by sharing findings and advancements.
Design and implement frameworks for testing the explainability and fairness of AI models.
Requirements:
Strong background in machine learning, deep learning, and model interpretability techniques.
Experience with XAI frameworks such as LIME and SHAP, or with other model-agnostic explanation methods.
Proficiency in Python and machine learning libraries (e.g., TensorFlow, PyTorch).
Familiarity with the ethical and regulatory challenges of AI explainability.
PhD or Master’s in Computer Science, AI, Machine Learning, or a related field.
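To illustrate the kind of model-agnostic explanation work referenced above, here is a minimal sketch of a LIME-style local surrogate: perturb inputs around one instance, weight perturbations by proximity, and fit a linear model whose coefficients serve as local feature attributions. All names and parameters here are illustrative, not part of the role's actual codebase.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on features 2 and 3.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 3.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)

# Any black-box model works; the explainer only needs its predict function.
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(predict, x, n_samples=1000, scale=0.5):
    """Fit a proximity-weighted linear surrogate around instance x
    and return its coefficients as local feature attributions."""
    # Sample perturbed points in a neighborhood of x
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    preds = predict(Z)
    # Gaussian kernel: nearby perturbations get more weight
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

attributions = explain_locally(model.predict, X[0])
print(attributions)
```

Because the surrogate is linear and local, its coefficients approximate the model's behavior only near the chosen instance, which is exactly the trade-off (fidelity vs. locality) a researcher in this role would study and refine.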