
Pooling Layers

Pooling layers reduce the spatial size of feature maps, decreasing computation and making the network more robust to small translations. Max pooling is the most common.

Max Pooling

Slide a 2×2 window with stride 2 over the feature map and output the maximum value in each window. This halves the height and width while keeping the strongest activations.
Input (4×4):
[ 1  2  3  4]
[ 5  6  7  8]
[ 9 10 11 12]
[13 14 15 16]

Max pooling (2×2, stride 2) → Output (2×2):
[max(1,2,5,6)    max(3,4,7,8)   ]   [ 6  8]
[max(9,10,13,14) max(11,12,15,16)] = [14 16]
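The example above can be sketched in plain NumPy. This is a minimal reference implementation (explicit loops, single channel, no padding), not an optimized one:

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    """Max pooling over a single 2-D feature map (no padding)."""
    h_out = (x.shape[0] - size) // stride + 1
    w_out = (x.shape[1] - size) // stride + 1
    out = np.empty((h_out, w_out), dtype=x.dtype)
    for i in range(h_out):
        for j in range(w_out):
            # take the max over each size×size window
            window = x[i * stride:i * stride + size,
                       j * stride:j * stride + size]
            out[i, j] = window.max()
    return out

x = np.arange(1, 17).reshape(4, 4)  # the 4×4 input from the example
print(max_pool2d(x))
# [[ 6  8]
#  [14 16]]
```

Deep learning frameworks provide the same operation as a layer (e.g. a 2-D max-pooling layer with kernel size 2 and stride 2), vectorized across channels and batches.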

Average Pooling

Outputs the average value in each window. Less common than max pooling but used in some architectures (e.g., global average pooling at the end).
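For comparison, 2×2 average pooling with stride 2 on the same 4×4 input can be written in one line with a reshape trick (this relies on the window size dividing the input size evenly):

```python
import numpy as np

x = np.arange(1, 17, dtype=float).reshape(4, 4)

# Group each non-overlapping 2x2 block and take its mean.
avg = x.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(avg)
# [[ 3.5  5.5]
#  [11.5 13.5]]
```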

Why Pooling Helps

  • Reduces spatial dimensions → fewer parameters and less computation.
  • Introduces (partial) translation invariance: a small shift that stays within a pooling window leaves the max unchanged.
  • Helps control overfitting.
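The invariance claim can be checked directly: shift a single strong activation by one pixel within its 2×2 pooling window and the pooled output does not change (a shift that crosses a window boundary would change it):

```python
import numpy as np

def pool(x):
    # 2x2 max pooling with stride 2 via reshape (4x4 input assumed)
    return x.reshape(2, 2, 2, 2).max(axis=(1, 3))

x = np.zeros((4, 4))
x[0, 0] = 1.0        # strong activation in the top-left window
y = np.zeros((4, 4))
y[1, 1] = 1.0        # same activation shifted one pixel down-right

print(np.array_equal(pool(x), pool(y)))  # True: identical pooled outputs
```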

Global Pooling

Global average pooling takes the average of each entire feature map, producing a 1D vector. Often used before the final classification layer to replace flatten + dense, reducing parameters dramatically.
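A sketch of global average pooling, assuming a hypothetical final conv output of 64 feature maps of size 7×7, with a rough weight count for a 10-class head to show the saving:

```python
import numpy as np

# Hypothetical final conv output: 64 feature maps of size 7x7.
fmap = np.random.rand(64, 7, 7)

# Global average pooling: one scalar per feature map.
gap = fmap.mean(axis=(1, 2))
print(gap.shape)  # (64,)

# Weight count for a 10-class dense head (biases ignored):
#   flatten + dense: 64 * 7 * 7 * 10 = 31,360 weights
#   GAP + dense:     64 * 10         = 640 weights
```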


Two Minute Drill
  • Max pooling takes the maximum in each window, reducing spatial size.
  • Average pooling takes the mean in each window.
  • Pooling adds translation invariance and reduces computation.
  • Global pooling replaces flatten + dense at the end.

Need more clarification?

Drop us an email at career@quipoinfotech.com