Neural Network From Scratch: Hidden Layers
A look at hidden layers as we upgrade the perceptron to a multilayer neural network

In my first and second articles about neural networks, I was working with the perceptron, a single-layer neural network. And even though our AI was able to recognize simple patterns, it couldn't be used for, say, object recognition in images. That's why today we'll talk about hidden layers and try to upgrade the perceptron to a multilayer neural network.
Hidden Layers
Why do we need hidden layers? Perceptrons recognize simple patterns, so maybe if we add more learning iterations, they'll learn to recognize more complex patterns too? Actually, no. A single-layer perceptron can only separate inputs with a straight line, no matter how long it trains; the classic XOR problem is already out of its reach. Hidden layers apply an additional transformation to the input values, and that extra transformation is what lets the network solve these more complex problems.
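To make this concrete, here is a minimal sketch of that idea. The weights, biases, and step activation below are hand-picked by me for illustration (they are not from the earlier articles): one hidden neuron acts as OR, the other as AND, and the output combines them into XOR, which no single-layer perceptron can compute.

```python
import numpy as np

def step(x):
    # Threshold activation: 1 if the weighted sum is positive, else 0
    return (x > 0).astype(float)

# Hand-picked weights and biases (illustration only):
# hidden neuron 1 behaves like OR, hidden neuron 2 behaves like AND
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])      # shape: (2 inputs, 2 hidden neurons)
b_hidden = np.array([-0.5, -1.5])      # OR threshold, AND threshold

# The output neuron computes "OR and not AND", which is exactly XOR
W_output = np.array([1.0, -1.0])
b_output = -0.5

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

hidden = step(X @ W_hidden + b_hidden)        # extra transformation of the inputs
output = step(hidden @ W_output + b_output)

print(output)  # [0. 1. 1. 0.] -- XOR, impossible for a single-layer perceptron
```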
Every hidden layer has its own inputs and outputs. Each layer keeps its own weights, its weighted inputs pass through an activation function, and during training each layer needs its own derivative calculation for backpropagation.
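As a rough sketch of what that looks like in code, here is a forward pass through one hidden layer with a sigmoid activation and its derivative. The layer sizes, variable names, and random weights are my own placeholders, not the exact code from this series:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(activated):
    # Derivative of the sigmoid, expressed through its own output;
    # backpropagation uses this to adjust each layer's weights
    return activated * (1.0 - activated)

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_outputs = 3, 4, 1

# Each layer has its own weight matrix
weights_hidden = rng.uniform(-1, 1, size=(n_inputs, n_hidden))
weights_output = rng.uniform(-1, 1, size=(n_hidden, n_outputs))

x = np.array([[0.0, 1.0, 1.0]])              # one training example

# Forward pass: inputs -> hidden layer -> output layer
hidden_out = sigmoid(x @ weights_hidden)     # hidden layer transforms the inputs
final_out = sigmoid(hidden_out @ weights_output)

# Derivatives that the backward pass will need, one per layer
hidden_grad = sigmoid_derivative(hidden_out)
output_grad = sigmoid_derivative(final_out)

print(final_out.shape, hidden_grad.shape)    # (1, 1) (1, 4)
```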
This is a visual representation of the neural network with hidden layers: