Gradient Descent

Gradient descent is an optimization algorithm used to minimize the cost function of a machine learning model. The cost function measures the difference between the model's predicted outputs and the actual target values. The goal of gradient descent is to find the set of model parameters (weights) that minimizes the cost function.
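
For example, a common cost function for regression is the mean squared error (MSE), which is also the cost used in the code later in this post. A minimal sketch in NumPy (the function name mse is illustrative):

import numpy as np

def mse(y_true, y_pred):
    # mean squared error: the average of the squared prediction errors
    return np.mean((y_true - y_pred) ** 2)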

The gradient descent algorithm works by iteratively adjusting the model parameters in the direction of the negative gradient of the cost function. This means that the algorithm moves the parameters in the direction of steepest descent, or the direction of the fastest decrease in the cost function.

In each iteration of the algorithm, the model parameters are updated by subtracting the gradient of the cost function with respect to the parameters, multiplied by a learning rate hyperparameter. The learning rate determines the step size of each update: a value that is too large can cause the algorithm to overshoot and diverge, while a value that is too small makes convergence slow.
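
As a sketch, a single update step can be written as follows (gradient_step and its arguments are illustrative names; params is the parameter vector and grad is the gradient of the cost at those parameters):

import numpy as np

def gradient_step(params, grad, learning_rate):
    # move the parameters one small step against the gradient
    return params - learning_rate * grad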

The gradient descent algorithm continues to update the model parameters until the cost function reaches a minimum or a convergence criterion is met. There are several variants of gradient descent, such as stochastic gradient descent (SGD), which updates the model parameters using a randomly selected subset of the training data in each iteration to reduce the computational cost of the algorithm.
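
As an illustration of the SGD idea, the sketch below estimates the gradients from a randomly chosen mini-batch in each step instead of the full dataset (sgd_epoch, batch_size, and the other names here are illustrative, not part of the code below):

import numpy as np

rng = np.random.default_rng(0)

def sgd_epoch(x, y, m, b, learning_rate, batch_size=2):
    # visit the training data once, in random order, one mini-batch at a time
    order = rng.permutation(len(x))
    for start in range(0, len(x), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = x[idx], y[idx]
        error = yb - (m * xb + b)
        # gradients estimated from the mini-batch only
        md = -(2 / len(xb)) * np.sum(xb * error)
        bd = -(2 / len(xb)) * np.sum(error)
        m -= learning_rate * md
        b -= learning_rate * bd
    return m, b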

Gradient Descent Python Code


import numpy as np

# Toy dataset generated by the line y = 2x + 3
x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 9, 11, 13])

m = b = 0.0  # initial slope and intercept
iterations = 10000
n = len(x)
learning_rate = 0.08

for i in range(iterations):
    y_predicted = m * x + b
    cost = np.mean((y - y_predicted) ** 2)  # mean squared error of the current fit
    # partial derivatives of the cost with respect to m and b
    md = -(2 / n) * np.sum(x * (y - y_predicted))
    bd = -(2 / n) * np.sum(y - y_predicted)
    # step in the direction of the negative gradient
    m -= learning_rate * md
    b -= learning_rate * bd
    if i % 1000 == 0 or i == iterations - 1:  # print progress occasionally
        print("m {:.3f}, b {:.3f}, cost {:.3f}, iteration {}".format(m, b, cost, i))
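
Since the data above lie exactly on the line y = 2x + 3, the printed values should converge toward m ≈ 2 and b ≈ 3 as the cost falls toward zero (with this learning rate, the first iterations can look unstable before the estimates settle).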
