Regularization Checkpoint
Covers overfitting, L1 regularization, and L2 regularization.
1. Overfitting means the model:
a) Memorizes the training data but fails on new data
b) Performs poorly on all data
c) Is too simple
d) Needs more parameters
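As context for this question, a minimal NumPy sketch of what memorizing the training data looks like: an exact-degree polynomial fit has near-zero training error but a much larger error on held-out points. The variable names and the degree-14 choice are illustrative, not from the checkpoint.

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.sort(rng.uniform(-1, 1, 15))
    y_train = np.sin(3 * x_train) + 0.2 * rng.normal(size=15)
    x_test = np.sort(rng.uniform(-1, 1, 15))
    y_test = np.sin(3 * x_test) + 0.2 * rng.normal(size=15)

    # A degree-14 polynomial can pass through all 15 training points,
    # so it memorizes the noise rather than the underlying sine curve.
    coeffs = np.polyfit(x_train, y_train, deg=14)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(train_err, test_err)  # training error tiny, test error far larger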
2. L2 regularization adds to the loss:
a) Sum of squared weights
b) Sum of absolute weights
c) Number of weights
d) Maximum weight
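For reference, a minimal sketch of how the L2 penalty, lambda times the sum of squared weights, is added to a data loss. It assumes a linear model with mean-squared-error loss; `lam` and the other names are illustrative.

    import numpy as np

    def l2_penalized_loss(weights, X, y, lam):
        # Data term: mean squared error of a linear model.
        residuals = X @ weights - y
        mse = np.mean(residuals ** 2)
        # L2 term: lam times the sum of squared weights.
        return mse + lam * np.sum(weights ** 2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
    print(l2_penalized_loss(np.zeros(3), X, y, lam=0.1))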
3. L1 regularization tends to produce:
a) Sparse weights (some exactly zero)
b) All small weights
c) Large weights
d) Negative weights only
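One way to see where the exact zeros come from, sketched under the assumption of proximal gradient descent on an L1-penalized loss: the L1 proximal step is soft-thresholding, which sets any weight within the threshold of zero exactly to zero. The names here are illustrative.

    import numpy as np

    def soft_threshold(w, t):
        # Proximal step for the L1 penalty: shrink each weight toward
        # zero by t, and clip it to exactly 0.0 when |w| <= t.
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    w = np.array([0.8, -0.05, 0.02, -1.3])
    print(soft_threshold(w, t=0.1))  # [ 0.7 -0.   0.  -1.2]

L2, by contrast, only scales weights down, so they become small but rarely land exactly on zero.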
4. Name one technique besides regularization to prevent overfitting.
5. A regularization strength (lambda) that is too high causes:
a) Underfitting
b) Overfitting
c) Perfect fit
d) No change
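To illustrate, a small sweep using scikit-learn's Ridge, whose `alpha` parameter plays the role of lambda here; the data are synthetic and illustrative. As the strength grows, the weights are crushed toward zero and even the training fit degrades, which is underfitting.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X @ np.array([2.0, -1.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)

    # Larger alpha -> smaller weights -> worse fit even on training data.
    for alpha in [0.01, 1.0, 100.0, 10000.0]:
        model = Ridge(alpha=alpha).fit(X, y)
        print(alpha, round(model.score(X, y), 3), np.abs(model.coef_).max())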