Autograd MLP: Neural Network with Automatic Differentiation
Use your Value class to build and train a small multi-layer perceptron (MLP). This is the capstone for the autograd module.
What you are building
1) Neuron
- Parameters: weights w (list of Value), bias b (Value)
- Activation: "tanh", "relu", or "linear"
class Neuron:
    def __init__(self, nin: int, activation: str = "tanh"):
        # initialize w with random Values in [-1, 1]
        # initialize b = Value(0.0)
        pass

    def __call__(self, x: List[Value]) -> Value:
        # act = sum(w_i * x_i) + b
        # apply activation
        pass

    def parameters(self) -> List[Value]:
        pass
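For reference, a minimal sketch of one way to fill in Neuron, assuming your Value class wraps a number, supports + and *, and exposes tanh() and relu() methods (adjust the activation calls to whatever your Value actually provides):

import random
from typing import List

class Neuron:
    def __init__(self, nin: int, activation: str = "tanh"):
        # one weight per input, drawn uniformly from [-1, 1]
        self.w = [Value(random.uniform(-1, 1)) for _ in range(nin)]
        self.b = Value(0.0)
        self.activation = activation

    def __call__(self, x: List[Value]) -> Value:
        # weighted sum of inputs plus bias; starting the sum at b keeps
        # everything inside Value arithmetic
        act = sum((wi * xi for wi, xi in zip(self.w, x)), self.b)
        if self.activation == "tanh":
            return act.tanh()   # assumes Value.tanh() exists
        if self.activation == "relu":
            return act.relu()   # assumes Value.relu() exists
        return act              # "linear": identity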
2) Layer
- A list of nout neurons
class Layer:
    def __init__(self, nin: int, nout: int, activation: str = "tanh"):
        pass

    def __call__(self, x: List[Value]) -> List[Value]:
        pass

    def parameters(self) -> List[Value]:
        pass
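A sketch of Layer as a thin wrapper around nout independent Neurons (built on the Neuron sketch above):

class Layer:
    def __init__(self, nin: int, nout: int, activation: str = "tanh"):
        # nout neurons, each fully connected to the nin inputs
        self.neurons = [Neuron(nin, activation) for _ in range(nout)]

    def __call__(self, x: List[Value]) -> List[Value]:
        return [n(x) for n in self.neurons]

    def parameters(self) -> List[Value]:
        # flatten the parameter lists of all neurons
        return [p for n in self.neurons for p in n.parameters()]

This version always returns a list; the single-output squeeze described in the Notes is handled at the MLP level in the sketch below.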
3) MLP
- Stack multiple layers
- Default activations: tanh for hidden layers, linear for output
- Accept input as List[float] or List[Value]
class MLP:
    def __init__(self, nin: int, nouts: List[int], activations: List[str] | None = None):
        pass

    def __call__(self, x: List[float] | List[Value]) -> Value | List[Value]:
        pass

    def parameters(self) -> List[Value]:
        pass
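One way to wire the layers together, assuming the Layer sketch above; the default activation pattern follows the bullets (tanh for hidden layers, linear for the output):

class MLP:
    def __init__(self, nin: int, nouts: List[int], activations: List[str] | None = None):
        if activations is None:
            # tanh for every hidden layer, linear for the output layer
            activations = ["tanh"] * (len(nouts) - 1) + ["linear"]
        sizes = [nin] + nouts
        self.layers = [
            Layer(sizes[i], sizes[i + 1], activations[i]) for i in range(len(nouts))
        ]

    def __call__(self, x: List[float] | List[Value]) -> Value | List[Value]:
        # promote plain numbers so the whole forward pass runs on Values
        xs = [xi if isinstance(xi, Value) else Value(xi) for xi in x]
        for layer in self.layers:
            xs = layer(xs)
        # a single-output network returns a bare Value (see Notes)
        return xs[0] if len(xs) == 1 else xs

    def parameters(self) -> List[Value]:
        return [p for layer in self.layers for p in layer.parameters()]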
4) train(model, X, y, epochs, lr)
- Train with mean squared error (MSE)
- Return a list of loss values (floats), one per epoch
def train(model, X, y, epochs, lr) -> List[float]:
    # preds = [model(xi) for xi in X]
    # loss = mean((pred - target)^2)
    # backward + SGD update
    pass
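A possible training loop. It assumes each parameter Value exposes .data, .grad, and .backward(), and that Value supports subtraction of a plain number, ** 2, and multiplication by a float (typical of a micrograd-style Value):

def train(model, X, y, epochs, lr) -> List[float]:
    losses = []
    for _ in range(epochs):
        # forward pass: mean squared error over the whole dataset
        preds = [model(xi) for xi in X]
        loss = Value(0.0)
        for pred, target in zip(preds, y):
            loss = loss + (pred - target) ** 2
        loss = loss * (1.0 / len(X))

        # zero gradients before backprop (see Notes), then backpropagate
        for p in model.parameters():
            p.grad = 0.0
        loss.backward()

        # plain SGD step
        for p in model.parameters():
            p.data -= lr * p.grad

        losses.append(loss.data)
    return losses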
Notes
- Use += in _backward paths so gradients accumulate correctly (see the sketch below).
- Zero parameter gradients before each backward pass.
- If a layer has one output, return a single Value instead of a length-1 list.
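The first note refers to the _backward closures inside your Value operations. Here is a sketch of the idea for addition; the graph bookkeeping (recording children for the backward traversal) is omitted, and the attribute names are assumptions to adapt to your own Value class:

# inside your Value class (sketch; children/op bookkeeping omitted)
def __add__(self, other):
    other = other if isinstance(other, Value) else Value(other)
    out = Value(self.data + other.data)

    def _backward():
        # += rather than =, so a node that feeds several consumers
        # accumulates gradient contributions from each of them
        self.grad += out.grad
        other.grad += out.grad
    out._backward = _backward

    return out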
Example
model = MLP(2, [4, 1])
losses = train(model, X=[[0,0],[0,1],[1,0],[1,1]], y=[0,1,1,0], epochs=50, lr=0.1)
print(losses[-1])