Numerical Gradient
Implement numerical gradient computation using finite differences. This technique approximates derivatives without symbolic calculus.
Functions to implement
1. numerical_derivative(f, x, h=1e-5)
Compute the derivative of a single-variable function at point x.
- Input: a function `f`, a point `x`, and a step size `h`
- Output: the approximate derivative f'(x)
- Use the central difference formula: (f(x+h) - f(x-h)) / (2h)
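A minimal sketch of this function, directly applying the central difference formula above:

```python
def numerical_derivative(f, x, h=1e-5):
    """Approximate f'(x) using the central difference formula."""
    return (f(x + h) - f(x - h)) / (2 * h)
```

The central difference is accurate to O(h^2), compared with O(h) for the one-sided difference (f(x+h) - f(x)) / h.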
2. numerical_gradient(f, x, h=1e-5)
Compute the gradient of a multi-variable function at point x.
- Input: a function `f` that takes a list, a list `x`, and a step size `h`
- Output: a list of partial derivatives (the gradient)
- Compute partial derivative for each dimension
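One way to sketch this, assuming the list-based signature above: perturb one coordinate at a time and apply the central difference to each.

```python
def numerical_gradient(f, x, h=1e-5):
    """Approximate the gradient of f at x, one central difference per dimension."""
    grad = []
    for i in range(len(x)):
        x_plus, x_minus = list(x), list(x)  # copies so x itself is untouched
        x_plus[i] += h
        x_minus[i] -= h
        grad.append((f(x_plus) - f(x_minus)) / (2 * h))
    return grad
```

Note that this calls `f` twice per dimension, which is why numerical gradients are slow in high dimensions.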
3. check_gradient(f, grad_f, x, h=1e-5, tolerance=1e-4)
Verify an analytical gradient against numerical gradient.
- Input: a function `f`, a gradient function `grad_f`, a point `x`, a step size `h`, and a tolerance
- Output: True if the gradients match within tolerance, False otherwise
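A sketch of the check, assuming an elementwise absolute-difference comparison (the exercise does not fix the exact comparison rule). It is self-contained, so it re-includes the numerical gradient:

```python
def numerical_gradient(f, x, h=1e-5):
    # Central difference along each dimension.
    grad = []
    for i in range(len(x)):
        x_plus, x_minus = list(x), list(x)
        x_plus[i] += h
        x_minus[i] -= h
        grad.append((f(x_plus) - f(x_minus)) / (2 * h))
    return grad

def check_gradient(f, grad_f, x, h=1e-5, tolerance=1e-4):
    """Return True if the analytical gradient matches the numerical one."""
    numerical = numerical_gradient(f, x, h)
    analytical = grad_f(x)
    # Compare each partial derivative within the given tolerance.
    return all(abs(n - a) <= tolerance for n, a in zip(numerical, analytical))
```

This is the standard gradient-checking pattern used to debug hand-written backpropagation code.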
Examples
```python
# Single variable
f = lambda x: x ** 2
numerical_derivative(f, 3.0)  # ~6.0 (derivative of x^2 is 2x)

# Multi-variable
g = lambda x: x[0]**2 + x[1]**2
numerical_gradient(g, [3.0, 4.0])  # ~[6.0, 8.0]

# Check gradient
h = lambda x: x[0]**2 + x[1]**2
grad_h = lambda x: [2*x[0], 2*x[1]]
check_gradient(h, grad_h, [3.0, 4.0])  # True
```
Notes
- Use the central difference formula for better accuracy
- A small `h` is needed for precision, but an `h` that is too small causes floating-point cancellation errors
- Numerical gradients are slow but useful for debugging analytical gradients
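The trade-off in the second note can be demonstrated by comparing errors at several step sizes (the function x^3 and the point x = 2 here are illustrative choices, not part of the exercise):

```python
f = lambda x: x ** 3
true_deriv = 3 * 2.0 ** 2  # f'(2) = 12 for f(x) = x^3

for h in (1e-1, 1e-5, 1e-13):
    approx = (f(2.0 + h) - f(2.0 - h)) / (2 * h)
    print(f"h={h:g}  error={abs(approx - true_deriv):.2e}")
```

A large `h` suffers truncation error; a tiny `h` suffers cancellation when subtracting two nearly equal floats. Something around 1e-5 is a reasonable middle ground for double precision.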
Run tests to see results