Gradient Descent from Scratch in 20 Lines
How do ML algorithms optimize their loss functions? This notebook implements gradient descent from scratch in under 20 lines of code.
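To make the idea concrete, here is a minimal sketch of the technique (a hypothetical example, not necessarily the notebook's exact code): gradient descent minimizing the 1-D quadratic f(x) = (x - 3)^2, whose gradient is f'(x) = 2(x - 3).

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # update rule: x <- x - lr * f'(x)
    return x

# Minimize f(x) = (x - 3)^2; its gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)  # converges close to the true minimum at x = 3
```

Each step moves x a small amount (scaled by the learning rate `lr`) in the direction that decreases f; the same update rule, applied to a model's parameters and its loss gradient, is what ML optimizers do.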