Q1. Scenario: You are optimizing delivery routes. The cost function C(x) = x^2 - 4x + 5 represents fuel cost as a function of speed x (in km/h). Find the speed that minimizes fuel cost using derivatives.
Take the derivative: C'(x) = 2x - 4. Set it to zero: 2x - 4 = 0 → x = 2 km/h, with minimum cost C(2) = 1. The second derivative C''(x) = 2 > 0 confirms a minimum. The derivative gives the rate of change of cost with respect to speed; at the optimum that rate is zero. This is the same first-order condition that gradient descent exploits in machine learning.
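To make the gradient-descent connection concrete, here is a minimal Python sketch. The starting point, learning rate, and iteration count are arbitrary illustrative choices, not part of the problem:

```python
# Gradient descent on C(x) = x^2 - 4x + 5, driven by the analytic gradient C'(x) = 2x - 4.

def C(x):
    return x**2 - 4*x + 5

def dC(x):
    return 2*x - 4

x = 10.0   # arbitrary starting speed (assumed for illustration)
lr = 0.1   # learning rate (assumed hyperparameter)
for _ in range(200):
    x -= lr * dC(x)   # step against the gradient

print(f"optimal speed ~ {x:.4f} km/h, cost ~ {C(x):.4f}")  # ~2.0000, ~1.0000
```

Because the update is x ← 0.8x + 0.4, the iterates contract toward the fixed point x = 2, matching the calculus answer.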
Q2. Scenario: In a temperature model, the position of a moving sensor is s(t) = t^3 - 6t^2 + 9t + 2 meters. Find when the velocity is zero and determine whether the sensor is accelerating or decelerating at those times.
Velocity v(t) = s'(t) = 3t^2 - 12t + 9 = 3(t - 1)(t - 3) = 0 → t = 1 and t = 3. Acceleration a(t) = s''(t) = 6t - 12. At t = 1, a = -6 m/s^2 (decelerating); at t = 3, a = +6 m/s^2 (accelerating). This shows how derivatives describe motion; neural networks use the analogous concept, gradients, to update weights.
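A short sympy sketch to verify the algebra (sympy is an assumed tool here, not part of the original answer):

```python
import sympy as sp

t = sp.symbols('t')
s = t**3 - 6*t**2 + 9*t + 2
v = sp.diff(s, t)       # velocity s'(t) = 3t^2 - 12t + 9
a = sp.diff(s, t, 2)    # acceleration s''(t) = 6t - 12

for r in sp.solve(v, t):   # times where velocity is zero: t = 1, 3
    print(f"t = {r}: acceleration = {a.subs(t, r)}")  # -6 at t=1, +6 at t=3
```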
Q3. Scenario: You have a function f(x) = e^x representing continuously compounded growth. What is the instantaneous rate of change at x = 0? Why is this special?
f'(x) = e^x, so at x = 0, f'(0) = 1. This is special because e^x is its own derivative: the rate of change equals the function value at every point. In machine learning, the exponential appears in the sigmoid and softmax functions, and their derivatives are exactly what backpropagation needs for gradient calculations.
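A quick numerical check of f'(0) = 1 via the limit definition of the derivative (a sketch added for illustration):

```python
import math

f = math.exp
# Approximate f'(0) by the difference quotient (f(h) - f(0)) / h as h shrinks.
for h in (1e-1, 1e-3, 1e-5, 1e-7):
    print(f"h = {h:.0e}: (f(h) - f(0)) / h = {(f(h) - f(0)) / h:.8f}")  # -> 1
```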
Q4. Scenario: In a physics model, the potential energy is U(x) = x^4 - 4x^2. Find the equilibrium points (where force F = -dU/dx = 0) and classify them as stable or unstable.
dU/dx = 4x^3 - 8x = 4x(x^2 - 2) = 0 → x = 0, ±√2. Second derivative: d^2U/dx^2 = 12x^2 - 8. At x = 0 it equals -8 (unstable maximum); at x = ±√2 it equals 16 (stable minima). Optimization in machine learning likewise seeks minima of cost functions; second derivatives measure curvature and inform convergence speed (as in Newton's method).
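The same classification carried out with sympy (a sketch; the library choice is an assumption):

```python
import sympy as sp

x = sp.symbols('x', real=True)
U = x**4 - 4*x**2
dU = sp.diff(U, x)        # 4x^3 - 8x
d2U = sp.diff(U, x, 2)    # 12x^2 - 8

for pt in sp.solve(dU, x):            # equilibria: 0, -sqrt(2), sqrt(2)
    curvature = d2U.subs(x, pt)
    label = "stable minimum" if curvature > 0 else "unstable maximum"
    print(f"x = {pt}: U'' = {curvature} -> {label}")
```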
Q5. Scenario: A company's profit is P(q) = -0.5q^2 + 20q - 50 (q in thousands of units, P in thousands of dollars). What quantity maximizes profit? What is the maximum profit?
P'(q) = -q + 20 = 0 → q = 20 thousand units. Maximum profit: P(20) = -0.5(400) + 20(20) - 50 = -200 + 400 - 50 = 150 thousand dollars. This quadratic optimization is analogous to training linear regression with squared-error loss, where setting the derivative to zero yields the normal-equation solution.
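A minimal sketch confirming the answer with the vertex formula for a downward-opening parabola (the formula restates the same first-order condition):

```python
def P(q):
    return -0.5*q**2 + 20*q - 50

# For a quadratic a*q^2 + b*q + c with a < 0, the maximum sits at q = -b / (2a).
a, b = -0.5, 20
q_star = -b / (2 * a)
print(f"q* = {q_star} thousand units, max profit = {P(q_star)} thousand dollars")
# q* = 20.0 thousand units, max profit = 150.0 thousand dollars
```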
