Fundamentals of Deep Learning

This blog is intended simply to spark your interest in learning Deep Learning.

Use the keywords you find here to dig deeper.

What maps the inputs of a node to its output? The activation function. What makes activation functions important is that they are typically non-linear.
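As a sketch of the idea, here are two common non-linear activations (sigmoid and ReLU) applied to a node's weighted sum of inputs; the specific input and weight values are made up for illustration:

```python
import math

def sigmoid(z):
    """Squashes any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    """Rectified Linear Unit: passes positives through, zeroes out negatives."""
    return max(0.0, z)

# A node computes a weighted sum of its inputs plus a bias,
# then applies the activation function to produce its output.
inputs = [0.5, -1.2, 3.0]
weights = [0.4, 0.1, -0.6]
bias = 0.2

z = sum(w * x for w, x in zip(weights, inputs)) + bias
output = relu(z)
```

Without the non-linearity, a stack of such nodes would collapse into a single linear transformation, no matter how many layers it has.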

Training a model means finding the weights that minimize the loss, as measured by a loss function. Loss is calculated at the end of each epoch to gauge how far the model's output is from the desired output. The objective, then, is to minimize the loss as much as possible using an optimizer; one of the most common optimizers is Stochastic Gradient Descent (SGD). The optimizer achieves this objective by finding the optimal values of the weights. That's how a model learns. Learning is finding the optimal weights! The model learns from the training data, and one epoch is completed when all the training points have been passed through the neural network.
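To make this concrete, here is a minimal sketch of SGD fitting a single weight on a toy dataset (y = 2x). The dataset, learning rate, and epoch count are arbitrary choices for illustration, not part of the blog:

```python
import random

random.seed(0)

# Toy training data: the true relationship is y = 2x.
data = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]

w = 0.0      # initial weight
lr = 0.05    # learning rate (a hyperparameter)

for epoch in range(50):        # one epoch = one full pass over the data
    random.shuffle(data)       # the "stochastic" part of SGD
    for x, y_true in data:
        y_pred = w * x                      # forward pass
        grad = 2.0 * (y_pred - y_true) * x  # d(loss)/dw for squared error
        w -= lr * grad                      # weight update

# Mean squared error after training, to gauge how close we are.
loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
```

After training, `w` converges to roughly 2.0 and the loss approaches zero; this is the "learning to get the optimal weights" described above, in miniature.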

Updating the weights involves: Learning Rate & Gradient.

The learning rate is an important hyperparameter one chooses before training a model; the gradient, by contrast, is computed during training via backpropagation.
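The update rule that combines these two can be sketched in a single line; the numbers below are hypothetical values chosen only to illustrate the step:

```python
# The optimizer applies this rule to every weight:
#     new_weight = old_weight - learning_rate * gradient
# The gradient points in the direction of steepest loss increase,
# so stepping against it decreases the loss; the learning rate
# scales how large each step is.

def sgd_step(weight, gradient, learning_rate):
    return weight - learning_rate * gradient

w = 0.8
grad = 0.5   # in practice, computed by backpropagation
w = sgd_step(w, grad, learning_rate=0.1)
# w is now approximately 0.75
```

Too large a learning rate can overshoot the minimum; too small a one makes training slow, which is why choosing it well matters.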

FAQs:

What’s the difference between epoch and iteration?

What’s batch size?

How do you choose an activation function?

What’s overfitting and underfitting?

What are the different hyperparameters in training a neural network and how do you choose the optimal values for them?

Does a neural network always outperform a traditional machine learning algorithm?

What are the terms in a neural network that mimic the biological human brain?

What's backpropagation?

What do vanishing and exploding gradients imply?

Can a Neural Network perform unsupervised tasks like clustering?

What are the most common types of Neural Networks?

Writers: Piyush Kulkarni (Data Scientist)

Our objective is to be a valued leader in providing quality, affordable instruction via practical training, workshops, & experiential learning.