Visualizing Deep Neural Networks

Some insight into the magic of deep learning

Almost everyone has some idea of what machine learning is, and most people have seen the successes of deep learning. With those successes, however, has come a loss of interpretability: models such as deep neural networks are hard to explain. In this project, Prateek Garg and I aim to study some of the theory and implement a few ideas that try to address the question of interpretability.

The work can be found here.