
Loss Functions

A loss function measures how far the network's prediction is from the true target. During training, we minimize this loss by adjusting weights and biases.

Loss = error between predicted ŷ and true y. Lower loss = better model.

Mean Squared Error (MSE) – Regression

Average of squared differences. Penalizes large errors heavily.
MSE = (1/n) Σ (y_i - ŷ_i)²
Use for regression (predicting continuous values).
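A minimal NumPy sketch of the MSE formula above (the function name `mse` and the sample values are illustrative, not from a specific library):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: average of squared differences."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

# Errors are 0.5, 0.0, and 2.0; squaring makes the 2.0 error dominate.
print(mse([3.0, 5.0, 2.0], [2.5, 5.0, 4.0]))  # (0.25 + 0 + 4) / 3 ≈ 1.4167
```

Note how squaring weights the single 2.0 error far more heavily than the 0.5 error, which is exactly the "penalizes large errors" behaviour described above.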

Mean Absolute Error (MAE) – Regression

Average of absolute differences. Less sensitive to outliers than MSE.
MAE = (1/n) Σ |y_i - ŷ_i|
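A matching sketch for MAE, using the same illustrative sample values so the two losses can be compared directly:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute differences."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred))

# Same errors as the MSE example: 0.5, 0.0, 2.0 — but no squaring.
print(mae([3.0, 5.0, 2.0], [2.5, 5.0, 4.0]))  # (0.5 + 0 + 2) / 3 ≈ 0.8333
```

Because the errors are not squared, the 2.0 outlier contributes linearly rather than quadratically, which is why MAE is less sensitive to outliers than MSE.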

Binary Cross‑Entropy – Binary Classification

Measures how well a predicted probability ŷ ∈ (0, 1) matches a binary label y ∈ {0, 1}.
BCE = - [y log(ŷ) + (1-y) log(1-ŷ)]   (per example; averaged over the batch in practice)
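A minimal sketch of batch-averaged binary cross-entropy (the `eps` clipping is a common numerical safeguard, not part of the formula itself):

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-12):
    """Binary Cross-Entropy, averaged over the batch."""
    y_true = np.asarray(y_true, dtype=float)
    # Clip predictions away from 0 and 1 so log() never sees 0.
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

# Confident, correct predictions give a low loss: both terms reduce to -log(0.9).
print(bce([1, 0], [0.9, 0.1]))  # ≈ 0.1054
```

A confidently wrong prediction (e.g. ŷ = 0.1 for y = 1) would instead contribute -log(0.1) ≈ 2.30, so the loss punishes overconfident mistakes sharply.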

Categorical Cross‑Entropy – Multi‑Class Classification

Used with a softmax output layer. Compares the predicted probability distribution to the one-hot encoded true labels; only the log-probability of the correct class contributes.
CCE = - Σ_i y_i log(ŷ_i)   (sum over classes i)
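A sketch of categorical cross-entropy for one-hot labels (the 2-D array layout, rows = examples and columns = classes, is an assumption for illustration):

```python
import numpy as np

def cce(y_true, y_pred, eps=1e-12):
    """Categorical Cross-Entropy for one-hot labels, averaged over the batch."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0)
    # Sum over classes (axis=1); the one-hot y_true keeps only the true class.
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# One example, 3 classes; true class is index 1, predicted with probability 0.7.
y_true = [[0, 1, 0]]
y_pred = [[0.1, 0.7, 0.2]]
print(cce(y_true, y_pred))  # -log(0.7) ≈ 0.3567
```

Because y_true is one-hot, the sum collapses to -log(ŷ) of the true class, so maximizing the softmax probability of the correct class is exactly what minimizes this loss.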

Choosing the Right Loss

  • Regression → MSE or MAE.
  • Binary classification → Binary Cross‑Entropy.
  • Multi‑class classification → Categorical Cross‑Entropy.
Two Minute Drill
  • Loss function quantifies prediction error.
  • MSE/MAE for regression.
  • Binary Cross‑Entropy for binary classification.
  • Categorical Cross‑Entropy for multi‑class classification.

Need more clarification?

Drop us an email at career@quipoinfotech.com