Forward Propagation
Forward propagation is the process of passing input data through the network to compute an output. It involves matrix multiplications, adding biases, and applying activation functions layer by layer.
Forward propagation = input → hidden layers → output. Each layer transforms the data.
Step‑by‑Step for a Simple Network
Consider a network with an input layer (2 neurons), one hidden layer (3 neurons), and an output layer (1 neuron).
1. Input vector x (size 2).
2. Hidden layer pre‑activation: z¹ = W¹·x + b¹ (matrix multiplication + bias).
3. Hidden layer activation: a¹ = activation(z¹) (e.g., ReLU).
4. Output layer pre‑activation: z² = W²·a¹ + b².
5. Output: ŷ = activation(z²) (e.g., sigmoid for binary classification).
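The five steps above can be sketched in NumPy. This is a minimal illustration, not a training-ready implementation: the weights and the input vector here are arbitrary example values, since the text does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters for the 2-3-1 network (values are arbitrary).
W1 = rng.standard_normal((3, 2))  # hidden layer: 3 neurons, 2 inputs
b1 = np.zeros(3)
W2 = rng.standard_normal((1, 3))  # output layer: 1 neuron, 3 hidden units
b2 = np.zeros(1)

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    z1 = W1 @ x + b1      # step 2: hidden pre-activation
    a1 = relu(z1)         # step 3: hidden activation
    z2 = W2 @ a1 + b2     # step 4: output pre-activation
    return sigmoid(z2)    # step 5: output, squashed into (0, 1)

x = np.array([0.5, -1.2])  # step 1: input vector of size 2
print(forward(x))
```

Because the output passes through a sigmoid, it always lands strictly between 0 and 1, which is what makes it usable as a binary-classification probability.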
Layer 1: z1 = W1·x + b1 → a1 = ReLU(z1)
Layer 2: z2 = W2·a1 + b2 → ŷ = sigmoid(z2)
Why Multiple Layers?
Each layer learns a different level of abstraction. In image recognition: first layer detects edges, second layer shapes, third layer objects. This hierarchy is key to deep learning's power.
Matrix Multiplication Advantage
Deep learning frameworks use vectorized operations (matrix multiplication) to process batches of inputs simultaneously, making training efficient on GPUs.
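To make the batching point concrete, here is a hedged sketch of the same 2-3-1 forward pass applied to several inputs at once: the inputs are stacked as rows of a matrix, so one matrix multiplication handles the whole batch (parameter values are again arbitrary placeholders).

```python
import numpy as np

rng = np.random.default_rng(1)

# Same 2-3-1 network shapes as before; values are arbitrary examples.
W1 = rng.standard_normal((3, 2))
b1 = np.zeros(3)
W2 = rng.standard_normal((1, 3))
b2 = np.zeros(1)

def forward_batch(X):
    # X has shape (batch, 2); each row is one input vector.
    Z1 = X @ W1.T + b1                 # (batch, 3): all hidden pre-activations at once
    A1 = np.maximum(0, Z1)             # ReLU applied elementwise to the whole batch
    Z2 = A1 @ W2.T + b2                # (batch, 1)
    return 1.0 / (1.0 + np.exp(-Z2))   # sigmoid

X = rng.standard_normal((4, 2))        # a batch of 4 inputs
print(forward_batch(X).shape)          # one output per input in the batch
```

The loop over examples has disappeared into the matrix products, which is exactly the structure that GPUs execute efficiently.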
Two Minute Drill
- Forward propagation computes output from input.
- Each layer: linear transformation (W·x+b) → activation.
- Hierarchy: edges → shapes → objects.
- Vectorized operations enable batch processing on GPUs.
