This assignment focuses on making black-box models more interpretable and explainable. You will:

  1. Model Training: Train a complex model (e.g., neural network, random forest) on a provided dataset
  2. Global Interpretability: Analyze feature importance and model behavior across the dataset
  3. Local Explanations: Generate explanations for individual predictions using LIME and SHAP
  4. Comparison: Contrast the explanation methods and the insights each one yields
  5. User Study: Design a simple user study to evaluate explanation quality
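
Steps 1 and 2 above can be sketched with scikit-learn alone. The snippet below is a minimal illustration, not the required implementation: it assumes the breast-cancer dataset as a stand-in for the course-provided data, and uses permutation importance as one model-agnostic way to measure global feature importance.

```python
# Sketch of steps 1-2: train a black-box model, then inspect global
# feature importance. The breast-cancer dataset is a stand-in for the
# course-provided dataset (an assumption); swap in your own data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0
)

# Step 1: fit a complex (black-box) model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Step 2: permutation importance measures how much held-out accuracy
# drops when each feature's values are shuffled -- a model-agnostic
# view of global feature importance.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranking = np.argsort(result.importances_mean)[::-1]
for idx in ranking[:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.4f}")
```

Permutation importance is chosen here over the forest's built-in `feature_importances_` because it works for any model, which matters once you swap in a neural network.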

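For step 3, the `lime` and `shap` packages provide the implementations you should use in your submission. The core LIME idea, though — perturb one instance, weight perturbations by proximity, and fit a weighted linear surrogate whose coefficients explain the black box locally — can be sketched with scikit-learn alone. This is a simplified illustration on synthetic data (an assumption), not a replacement for the libraries:

```python
# Simplified LIME-style local explanation: sample perturbations around
# one instance, query the black box, weight samples by an exponential
# proximity kernel, and fit a linear surrogate. The surrogate's
# coefficients are the per-feature local attributions.
# Synthetic data stands in for the course dataset (assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=1000, kernel_width=2.0):
    """Return linear-surrogate coefficients for one instance x."""
    # Perturb the instance of interest with feature-scaled noise.
    Z = x + rng.normal(scale=X.std(axis=0), size=(n_samples, x.size))
    # Query the black box at the perturbed points.
    preds = model.predict_proba(Z)[:, 1]
    # Weight samples by proximity to x (exponential kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # Fit the weighted linear surrogate; its coefficients approximate
    # the black box's behaviour in the neighbourhood of x.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

coefs = explain_locally(black_box, X[0])
print("local feature attributions:", np.round(coefs, 3))
```

Comparing these hand-rolled attributions against `lime` and `shap` output on the same instance is a useful sanity check for the comparison step.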
Learning Objectives:

  • Implement various explainability techniques
  • Compare and contrast different explanation methods
  • Evaluate the quality and usefulness of explanations
  • Consider human factors in explainability

Deliverables:

  • Implementation of explanation methods
  • Comparative analysis report
  • User study design and pilot results

Resources: