18_deep_neural_network #29

Open

wants to merge 16 commits into base: master

Conversation

dariusamiri

@@ -0,0 +1,201 @@
# Deep Neural Networks


Try to have a different format for title and write authors' names in your lecture note

Deep learning is a subfield of machine learning that deals with algorithms inspired by the structure and function of the brain. Deep learning is a subset of machine learning, which is a part of artificial intelligence (AI).
![](https://i.imgur.com/qhjJzDb.png)

CNN's are models to solve deep learning problems. Suppose that you have high-dimensional inputs such as images or videos. If we want to use MLPs, 2 (or more) dimensional inputs need to be converted to 1-dimensional vectors. This conversion increases the number of trainable parameters exponentially. Also, one important thing in these data is locality, it means that for example in an image, you can find features in near pixels (for examples corners and edges) but, far pixels can't give you efficient features. The solution for solving these problems is using CNNs.


CNNs


find features in adjacent pixels


for examples -> for example


far pixels -> distant pixels
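
To make the parameter-count contrast above concrete, here is a minimal sketch (the 224×224×3 input size and the PyTorch layers are illustrative assumptions, not taken from the note): a fully connected layer on the flattened image needs millions of weights, while a conv layer's weight count depends only on the kernel size and channel counts.

```python
import torch.nn as nn

# Illustrative input: a 224x224 RGB image flattened to a vector for an MLP.
fc = nn.Linear(224 * 224 * 3, 128)                        # 150,528 inputs -> 128 units
conv = nn.Conv2d(in_channels=3, out_channels=128, kernel_size=3)

fc_params = sum(p.numel() for p in fc.parameters())       # ~19.3 million parameters
conv_params = sum(p.numel() for p in conv.parameters())   # 3*3*3*128 + 128 = 3,584

print(fc_params, conv_params)
```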

A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume through a differentiable function. A few distinct types of layers are commonly used:

* Fully Connected Layer
* Convolutional layer


layer -> Layer (to be synced with other bullets)

@@ -0,0 +1,201 @@
# Deep Neural Networks

## Table of Content


The order of contents doesn't match with the table. Revise them

![](https://i.imgur.com/3nItEgk.png)

## Conv Layer
This layer is the main difference between CNNs and MLPs. Convolution in the word refers to two operators between two functions. In mathematics convolution define as below:


define -> is defined
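
The equation referenced above did not come through in the quoted text; it is presumably the standard definition of convolution, given here for reference in continuous and discrete form:

$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau, \qquad (f * g)[n] = \sum_{m=-\infty}^{\infty} f[m]\, g[n - m]$$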



Here we’ll not talk about details, but convolutional layers are somehow enabling convolution operator on sub-matrices of the image. These layers have formed from some kernel with the same height, width, and depth. The number of these kernels is equal to the depth of the output. Also, the depth of each kernel must be equal to the depth of input. For example, if you have RGB data, your first convolutional layer kernels depth must be 3.
In the context of a convolutional neural network, convolution is a linear operation that involves the multiplication of a set of weights with the input. A convolution layer has formed by 1 or more of these operations that each of them called a kernel. All kernels have the same height, width, and depth. To find the output of the layer, we put the first kernel on the top-right of the input and calculate the output of the kernel, and put it as the first cell of a matrix. After that, we move it to right and calculate again, and put the result in the second cell. When we receive to end of columns, we move the kernel down. we do this until we rich to the end of the image. We do this for all kernels and this is how we make the output of the convolutional layer.


When we reach the last column




we -> We
rich -> reach




Using so many "we"s in the context is a bad smell!
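
Spelling out the sliding-window procedure described above, here is a minimal single-channel sketch (the function name and shapes are illustrative assumptions; libraries compute this cross-correlation and still call it convolution):

```python
import numpy as np

def conv2d_single_channel(image, kernel):
    """Slide `kernel` over `image`, summing elementwise products at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1          # output shrinks when there is no padding
    out = np.zeros((oh, ow))
    for i in range(oh):                        # move the kernel down, row by row
        for j in range(ow):                    # ...and across each row, one pixel at a time
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 kernel on a 5x5 input produces a 3x3 feature map: (5 - 3 + 1) per side.
print(conv2d_single_channel(np.random.rand(5, 5), np.ones((3, 3))).shape)
```

One kernel produces one such feature map; stacking the maps from all kernels gives the depth of the layer's output, as the note describes.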


## Pooling

Similar to the Convolutional Layer, the Pooling layer is responsible for reducing the spatial size of the Convolved Feature.


convolved feature

While a lot of information is lost in the pooling layer, it also has a number of benefits to the Convolutional neural network. They help to reduce complexity, improve efficiency, and limit the risk of overfitting.


if you want to use capital case, use it for all words of a phrase: Convolutional Neural Network

There are two types of Pooling:

1. Max Pooling: it returns the maximum value from the portion of the image covered by the Kernel. and also performs as a Noise Suppressant. It discards the noisy activations altogether and also performs de-noising along with dimensionality reduction.
2. Average Pooling: it returns the average of all the values from the portion of the image covered by the Kernel. and simply performs dimensionality reduction as a noise suppressing mechanism.


by the kernel, and simply
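
A minimal sketch of the two pooling types over 2×2 windows (NumPy; the helper below is an illustrative assumption, not part of the note):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling: the window moves by its own size each step."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]                    # drop any ragged border
    blocks = x.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(x, mode="max"))   # [[ 5.  7.] [13. 15.]]
print(pool2d(x, mode="avg"))   # [[ 2.5  4.5] [10.5 12.5]]
```

Either way the 4×4 input is reduced to 2×2, which is the spatial-size reduction the section describes.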


## Padding

As you see, after applying convolutional layers, the size of the feature map is always smaller than the input, we have to do something to prevent our feature map from shrinking. This is where we use padding. Layers of zero-value pixels are added to surround the input with zeros so that our feature map will not shrink. By padding, we can control the shrinking of our inputs.


smaller than the input. We have to ...

Different padding mode is:


Different padding modes are:

* zeros(Default)
* reflect


Explain or hyperlink what this padding is

* replicate or circular


Explain or hyperlink what this padding is
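
For reference, the listed modes match NumPy's `np.pad` (and PyTorch's `Conv2d` `padding_mode`) options; a tiny 1-D example, with purely illustrative values, shows how each one fills the border:

```python
import numpy as np

x = np.array([1, 2, 3, 4])
print(np.pad(x, 2, mode="constant"))  # zeros:     [0 0 1 2 3 4 0 0]
print(np.pad(x, 2, mode="reflect"))   # reflect:   [3 2 1 2 3 4 3 2]  (mirror, edge not repeated)
print(np.pad(x, 2, mode="edge"))      # replicate: [1 1 1 2 3 4 4 4]  (repeat the edge value)
print(np.pad(x, 2, mode="wrap"))      # circular:  [3 4 1 2 3 4 1 2]  (wrap around)
```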



## Stride
As we said before, when you're applying a kernel to the image, you have to move the kernel during the image. But sometimes you prefer to not move one pixel every time and move the kernel more than one pixel. This is stride. The stride specifies how many kernels have to move each time.


during -> along




to not move -> not to move
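
The effect of stride, together with padding and kernel size, on the output size follows out = ⌊(in + 2·padding − kernel) / stride⌋ + 1; a quick check with PyTorch (the 32×32 input is an arbitrary illustration):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)  # batch, channels, height, width
print(nn.Conv2d(3, 8, kernel_size=3, stride=1)(x).shape)             # torch.Size([1, 8, 30, 30])
print(nn.Conv2d(3, 8, kernel_size=3, stride=2)(x).shape)             # torch.Size([1, 8, 15, 15])
print(nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1)(x).shape)  # torch.Size([1, 8, 16, 16])
```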

## Table of Content

- [Introduction](#introduction)
- [CNN Architecture](#CNN-Architecture)


All mentioned layers and functions can be considered as subsections for this section


This seems unresolved yet.
I mean this structure:

  • CNN Architecture
    - Fully Connected Layers
    - Conv Layer
    ...


@nimajam41 nimajam41 left a comment


1- Review and revise your English mistakes
2- Edit Table of Contents

@dariusamiri (Author) commented Jan 15, 2022 via email

@dariusamiri (Author) commented Jan 15, 2022 via email

It sweeps a filter across the entire input but that does not have any weights. Instead, the kernel applies an aggregation function to the values within the receptive field, populating the output array.
There are two types of Pooling:

1. Max Pooling: it returns the maximum value from the portion of the image covered by the Kernel. and also performs as a Noise Suppressant. It discards the noisy activations altogether and also performs de-noising along with dimensionality reduction.

@nimajam41 nimajam41 Feb 3, 2022


Start sentences with capital letters


@nimajam41 nimajam41 left a comment


Thanks for your revision. Most of the issues have been resolved. Try to make the Table of Contents hierarchical, explain what different paddings are, and edit some wrong usages of grammar.
