The primary idea behind this project was to understand and visualize neural networks. The work was done as part of my nine-week research internship at the Department of Computer Science, Habib University, in the summer of 2019, under the supervision of Dr. Musabbir Majeed and Dr. Abdul Samad.
The following poster highlights some of the things that I did as part of this research internship:
I was a novice in the world of machine and deep learning at the start, so we began with the absolute basics: simple models such as linear regression, logistic regression, and multi-layered feedforward networks, all built from scratch using only numpy. The emphasis was on understanding the backpropagation algorithm and being able to explain the results. We then built the same models in TensorFlow and in Keras to confirm that our results matched. We also looked into how deep learning frameworks such as TensorFlow organize their computations as computational graphs, and explored TensorBoard. The ultimate aim was to build a visualization tool, as well as a debugging tool, on top of TensorFlow.
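To give a flavour of those from-scratch exercises, here is a minimal sketch of logistic regression trained with a hand-derived backward pass in plain numpy. The toy data, learning rate, and epoch count are made up for illustration and are not taken from the actual internship code:

```python
import numpy as np

# Toy data: two Gaussian blobs with labels 0 and 1 (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)]).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros((2, 1))   # weights
b = 0.0                # bias
lr = 0.1               # learning rate

for epoch in range(500):
    # Forward pass: predicted probabilities.
    p = sigmoid(X @ w + b)
    # Binary cross-entropy loss, averaged over the batch.
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Backward pass: gradient of the loss w.r.t. the logits is (p - y)
    # for the sigmoid + cross-entropy combination.
    grad_logits = (p - y) / len(X)
    grad_w = X.T @ grad_logits
    grad_b = grad_logits.sum()
    # Plain gradient-descent update.
    w -= lr * grad_w
    b -= lr * grad_b

print("final loss:", loss)
```

The same model written in Keras (a `Dense(1, activation="sigmoid")` layer trained with binary cross-entropy) should converge to essentially the same weights, which is how we cross-checked the hand-written gradients.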
To understand the models better, we then moved on to convolutional neural networks. Here we implemented some well-known CNN architectures, such as VGG and AlexNet, for visualization purposes. We then explored and implemented several visualization techniques, such as the deconvolution technique, and were able to replicate the results of the paper that proposed it.
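As a much simpler cousin of the deconvolution technique, the sketch below runs an input through a pretrained VGG16 from Keras and plots the raw feature maps of one convolutional layer. The chosen layer name and the random placeholder input are illustrative assumptions, not part of the original work:

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Pretrained VGG16 with ImageNet weights (downloaded on first use).
model = tf.keras.applications.VGG16(weights="imagenet", include_top=False)

# Sub-model exposing the feature maps of one conv layer.
layer_name = "block3_conv1"  # any conv layer name from model.summary()
feature_model = tf.keras.Model(inputs=model.input,
                               outputs=model.get_layer(layer_name).output)

# A random array stands in for a real image here (illustrative only).
img = np.random.uniform(0, 255, (1, 224, 224, 3)).astype("float32")
img = tf.keras.applications.vgg16.preprocess_input(img)

# Forward pass, then plot the first 16 channels of the feature map.
maps = feature_model.predict(img)[0]
fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(maps[:, :, i], cmap="viridis")
    ax.axis("off")
plt.tight_layout()
plt.show()
```

Swapping the random input for a real photograph resized to 224×224 makes the maps far more interpretable; the deconvolution approach goes further by projecting those activations back into pixel space.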
Python, numpy, matplotlib, TensorFlow, TensorBoard, Keras, Google Colab, Slack, Trello, and Git. A few other tools were used apart from these, but only for the sake of exploration, so my exposure to them was very limited.