Merge pull request #16 from dlmbl/cmm_edits
Suggested Changes
afoix authored Aug 19, 2024
2 parents 09c6e65 + ca7f24c commit 7fd2f36
Showing 6 changed files with 606 additions and 435 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -0,0 +1,2 @@
mnist/
fashion_mnist/
13 changes: 5 additions & 8 deletions README.md
@@ -8,11 +8,11 @@ git submodule update --init --recursive 07_failure_modes
```

## Goal
In Exercise 7: Failure Modes and Limits of Deep Learning, we delve into understanding the limits and failure modes of neural networks, especially in the context of image classification. This exercise highlights how differences between tainted and clean training datasets as well as test datasets can affect the performance of neural networks in ways that we will try to understand. By tampering with image datasets and introducing extra visual information, the exercise aims to illustrate real-world scenarios where data collection inconsistencies can corrupt datasets. The goal is to investigate the internal reasoning of neural networks, and use tools like Integrated Gradients, which help in identifying crucial areas of an image that influence classification decisions.
In Exercise 7: Failure Modes and Limits of Deep Learning, we delve into understanding the limits and failure modes of neural networks in the context of image classification. By tampering with image datasets and introducing extra visual information, the exercise mimics real-world scenarios where data collection inconsistencies can corrupt datasets.
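A local corruption of this kind can be as simple as stamping an extra visual marker into some of the training images. The sketch below is purely illustrative (the helper name, marker size, and placement are hypothetical, not the exercise's actual code):

```python
import numpy as np

def taint_with_marker(img: np.ndarray, size: int = 4, value: float = 1.0) -> np.ndarray:
    """Stamp a bright square into the top-left corner of an image.

    This simulates an extraneous visual cue that a classifier can latch
    onto instead of the true class content.
    """
    tainted = img.copy()
    tainted[:size, :size] = value
    return tainted

# Example: corrupt a blank 28x28 grayscale image
clean = np.zeros((28, 28), dtype=np.float32)
tainted = taint_with_marker(clean)
```

If such a marker correlates with one class in the tainted training set but is absent at test time, the network's accuracy on that class can collapse, which is exactly the kind of failure mode the exercise probes.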

The exercise involves creating and training neural networks on both tainted and clean datasets, examining how these networks handle local and global data corruptions. We will visualize the network's performance through confusion matrices and interpret the attention maps generated by Integrated Gradients. Additionally, the exercise explores how denoising networks cope with domain changes by training a UNet model on noisy MNIST data and testing it on both similar and different datasets like FashionMNIST. Through these activities, participants are encouraged to think deeply about neural network behavior, discuss their findings in groups, and reflect on the impact of dataset inconsistencies on model performance.
The exercise examines how neural networks handle local and global data corruptions. We will reason about a classification network's performance through confusion matrices, and use tools like Integrated Gradients to identify areas of an image that influence classification decisions. Additionally, the exercise explores how denoising networks cope with domain changes by training a UNet model on noisy MNIST data and testing it on both similar and different datasets like FashionMNIST.
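Integrated Gradients attributes a model's output to its input features by averaging gradients along a straight-line path from a baseline (for images, typically a black image) to the input, then scaling by the input-baseline difference. A minimal pure-NumPy sketch of the idea, using a toy scalar function and numerical gradients (the exercise's actual tooling may differ):

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    # Central-difference approximation of the gradient of f at x.
    g = np.zeros_like(x)
    for i in range(x.size):
        up, down = x.copy(), x.copy()
        up.flat[i] += eps
        down.flat[i] -= eps
        g.flat[i] = (f(up) - f(down)) / (2 * eps)
    return g

def integrated_gradients(f, x, baseline, steps=50):
    # Average the gradient at `steps` points on the straight-line path
    # from the baseline to the input, then scale by (input - baseline).
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        total += numerical_grad(f, point)
    return (x - baseline) * total / steps

# Toy "network output": sum of squared pixel intensities
f = lambda x: float((x ** 2).sum())
x = np.array([0.5, 1.0, 2.0])
baseline = np.zeros_like(x)
attributions = integrated_gradients(f, x, baseline)
```

By the completeness property, the attributions approximately sum to `f(x) - f(baseline)`, which is a handy sanity check for any Integrated Gradients implementation.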

In a broader sense, this exercise helps participants recognize the importance of dataset quality and consistency in training robust neural networks. By exploring these failure modes, participants gain insights into the internal workings of neural networks and learn how to diagnose and mitigate potential issues. This understanding is crucial for developing more reliable machine learning models and ensuring their effective application in real-world scenarios where data inconsistencies are common.
Through these activities, participants are encouraged to think deeply about neural network behavior, discuss their findings in groups, and reflect on the impact of dataset inconsistencies on model performance and robustness. By exploring failure modes, participants gain insights into the internal workings of neural networks and learn how to diagnose and mitigate issues that are common in real-world scenarios.


## Methodology
@@ -73,8 +73,5 @@ Please run the setup script to create the environment for this exercise and down
source setup.sh
```

When you are ready to start the exercise, make sure you are in your base environment and then run jupyter lab.
```bash
mamba activate base
jupyter lab
```
When you are ready to start the exercise, open the `exercise.ipynb` file in VSCode
and select the `07-failure-modes` kernel.