PyTorch Tensors

PyTorch uses tensors (multi‑dimensional arrays) similar to NumPy, but with GPU acceleration and automatic differentiation. Understanding tensors is the first step to using PyTorch.

Creating Tensors

import numpy as np
import torch

# From a Python list
a = torch.tensor([1, 2, 3])
# Uniform random values in [0, 1)
b = torch.rand(3, 4)
# All zeros
c = torch.zeros(2, 3)
# From a NumPy array (shares memory with the source array)
d = torch.from_numpy(np.array([1.0, 2.0, 3.0]))

Tensor Operations

Most NumPy operations work on tensors: addition, multiplication, reshaping, indexing.
x = torch.rand(5,3)
print(x.shape)
print(x + x)
print(x.mean(dim=0))
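The reshaping and indexing mentioned above also mirror NumPy; a quick sketch:

```python
import torch

x = torch.arange(12)   # tensor([0, 1, ..., 11])
m = x.reshape(3, 4)    # view the same data as a 3x4 matrix
print(m[0])            # first row: tensor([0, 1, 2, 3])
print(m[:, 1])         # second column: tensor([1, 5, 9])
print(m.T.shape)       # transpose: torch.Size([4, 3])
```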

Moving to GPU

if torch.cuda.is_available():
    device = torch.device('cuda')
    x = x.to(device)
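A common pattern is to choose the device once and reuse it, so the same script runs with or without a GPU; a minimal sketch:

```python
import torch

# Pick the best available device once, then reuse it everywhere;
# falls back to CPU when no GPU is present
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.rand(5, 3).to(device)  # move the tensor to that device
y = x + x                        # computed on the same device as x
print(y.device)
```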

Autograd (Automatic Differentiation)

PyTorch tracks operations on tensors with `requires_grad=True`. Call `.backward()` to compute gradients.
x = torch.tensor([2.0], requires_grad=True)
y = x**2
y.backward()
print(x.grad)  # dy/dx = 2*x = tensor([4.])
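One detail worth knowing: gradients accumulate across `.backward()` calls, so they must be zeroed between steps (optimizers do this via `zero_grad()`). A small sketch:

```python
import torch

x = torch.tensor([3.0], requires_grad=True)
y = x**2 + 2*x  # dy/dx = 2*x + 2
y.backward()
print(x.grad)   # tensor([8.])

# Gradients accumulate: clear them before the next backward pass
x.grad.zero_()
z = 5 * x       # dz/dx = 5
z.backward()
print(x.grad)   # tensor([5.])
```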


Two Minute Drill
  • Tensors are multi‑dimensional arrays with GPU support.
  • Create via `torch.tensor()`, `torch.rand()`, `torch.zeros()`.
  • Move to GPU with `.to(device)`.
  • Autograd computes gradients automatically.

Need more clarification?

Drop us an email at career@quipoinfotech.com