In the previous articles, we learned that neural networks look for correlation between the inputs and the outputs of a training set. We also learned that, based on this pattern, the weights have an overall tendency to increase or decrease until the network predicts all the values correctly.
In the previous article, we learned how a network adjusts its weights to improve the accuracy of its predictions using gradient descent, working with a simple 1-input/1-output network. In this article, we will learn how to generalize this technique for networks with any number of inputs and outputs.
We will concentrate on 3 different scenarios:
- Gradient descent on NNs with multiple inputs
- Gradient descent on NNs with multiple outputs
- Gradient descent on NNs with multiple inputs and multiple outputs
Before diving into these scenarios, let's quickly recap. In an earlier article, we learned about hot/cold learning, where we nudge a weight up and down and keep whichever change reduces the error. We also learned that hot/cold learning has some problems: it's slow and prone to overshooting, so we need a better way of adjusting the weights.
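To make those problems concrete, here is a minimal sketch of hot/cold learning for a 1-input/1-output network. The specific numbers (input, goal, step size) are assumptions for illustration, not values taken from the original articles:

```python
weight = 0.5   # starting weight (assumed)
input = 0.5    # single training input (assumed)
goal = 0.8     # target output (assumed)
step = 0.001   # fixed step size

for iteration in range(1101):
    pred = input * weight
    error = (pred - goal) ** 2

    # Try nudging the weight in both directions ("hot" and "cold")...
    up_error = ((input * (weight + step)) - goal) ** 2
    down_error = ((input * (weight - step)) - goal) ** 2

    # ...and keep whichever direction reduces the error.
    if down_error < up_error:
        weight -= step
    else:
        weight += step

print(weight)  # crawls toward goal / input = 1.6, one tiny step at a time
```

Because the step size is fixed, the weight moves by the same 0.001 on every iteration no matter how large the error is, which is why it takes over a thousand iterations to arrive. And once the weight reaches 1.6, the loop steps past it to 1.601 on the final iteration: with a fixed step, the weight can only jump back and forth over the best value rather than settle on it.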
A better approach should take into consideration how accurate our predictions are and adjust the weight accordingly: the larger the error, the larger the adjustment. This is exactly what gradient descent does, scaling each update by both the prediction error and the input.
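Here is the same tiny network trained with gradient descent instead, again a sketch with assumed values; `alpha` is the learning rate:

```python
weight = 0.5
input = 0.5
goal = 0.8
alpha = 1.0   # learning rate (assumed; this value happens to work for this tiny example)

for iteration in range(20):
    pred = input * weight
    delta = pred - goal              # how far off we are, and in which direction
    weight_delta = input * delta     # scale the raw error by the input
    weight -= alpha * weight_delta   # big error -> big adjustment, small error -> small one

print(weight)  # ~1.6, i.e. pred ~ goal, in 20 iterations instead of 1100
```

Each update is `input * (pred - goal)`, which is (up to a constant factor) the derivative of the squared error with respect to the weight. In the first scenario above, this loop barely changes: `input`, `weight`, and `weight_delta` become vectors and the prediction becomes a weighted sum; the other two scenarios generalize in the same way.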