# LINEAR REGRESSION

---

**HOW DO WE DEFINE IT?**

Linear regression is a type of statistical analysis used to model the relationship between two variables. It assumes a linear relationship between the independent variable and the dependent variable, and aims to find the best-fitting line that describes that relationship.

Linear regression is commonly used in many fields, including economics, finance, and social sciences, to analyze and predict trends in data.

## SIMPLE LINEAR REGRESSION

In a simple linear regression, there is one independent variable and one dependent variable. The model estimates the slope and intercept of the line of best fit, which represents the relationship between the variables. The slope represents the change in the dependent variable for each unit change in the independent variable, while the intercept represents the predicted value of the dependent variable when the independent variable is zero.

Linear regression shows the linear relationship between the independent (predictor) variable on the X-axis and the dependent (output) variable on the Y-axis. If there is a single input variable **X** (independent variable), it is called **simple linear regression**.

## HOW TO CALCULATE THE BEST-FIT LINE?

To calculate the best-fit line, linear regression uses the traditional slope-intercept form given below:

**Yi = β0 + β1*Xi**

where **Yi** = Dependent variable, **β0** = Constant/Intercept, **β1** = Slope coefficient, **Xi** = Independent variable.

This algorithm explains the linear relationship between the dependent (output) variable Y and the independent (predictor) variable X using the straight line **Y = β0 + β1\*X**.
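The slope and intercept of the best-fit line can be estimated with the standard closed-form least-squares formulas. Here is a minimal sketch in plain Python; the function name and sample data are illustrative, not from any particular library:

```python
# Fit Y = B0 + B1*X by ordinary least squares using the
# closed-form estimates for slope and intercept.

def fit_simple_linear_regression(xs, ys):
    """Return (b0, b1) minimizing the sum of squared residuals."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # B1 = covariance(X, Y) / variance(X)
    b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
         / sum((x - mean_x) ** 2 for x in xs)
    # B0 = mean(Y) - B1 * mean(X)
    b0 = mean_y - b1 * mean_x
    return b0, b1

# Perfectly linear data generated by Y = 2 + 3*X
b0, b1 = fit_simple_linear_regression([1, 2, 3, 4], [5, 8, 11, 14])
```

On this toy data the fit recovers the intercept 2 and slope 3 exactly, since the points lie on a straight line.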

In regression, the difference between the observed value of the dependent variable (**Yi**) and the predicted value (**Ypredicted**) is called the residual:

**εi = Yi - Ypredicted**

**where Ypredicted = β0 + β1\*Xi**
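As a small illustration, here is how the residuals εi = Yi - Ypredicted could be computed for a fitted line; the coefficient values are assumed purely for the example:

```python
# Compute residuals e_i = Y_i - (B0 + B1*X_i) for each data point.

def residuals(xs, ys, b0, b1):
    """Observed minus predicted for each data point."""
    return [y - (b0 + b1 * x) for x, y in zip(xs, ys)]

# Line y = 1 + 2x; the observed points deviate slightly from it.
errs = residuals([0, 1, 2], [1.5, 2.5, 5.5], b0=1.0, b1=2.0)
# errs -> [0.5, -0.5, 0.5]
```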

# Cost Function for Linear Regression

The cost function helps to work out the optimal values for B0 and B1, which provides the best fit line for the data points.

In Linear Regression, the **Mean Squared Error (MSE)** cost function is generally used; it is the average of the squared errors between **Ypredicted** and **Yi**.

Using the simple linear equation y = mx + b, we calculate MSE as:

**MSE = (1/n) \* Σ (yi - (m\*xi + b))²**
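The MSE formula translates directly into code. A minimal sketch, with illustrative parameter names matching the y = mx + b form:

```python
# Mean Squared Error of the line y = m*x + b over the data points.

def mse(xs, ys, m, b):
    """Average of squared residuals between observed and predicted values."""
    n = len(xs)
    return sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys)) / n

# A perfect fit gives MSE = 0; shifting the intercept up by 1
# makes every residual -1, so the MSE becomes 1.
assert mse([1, 2, 3], [3, 5, 7], m=2, b=1) == 0
assert mse([1, 2, 3], [3, 5, 7], m=2, b=2) == 1
```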

# Gradient Descent for Linear Regression

Gradient Descent is an optimization algorithm that minimizes the cost function (objective function) to reach the optimal solution. To find the optimum we need to reduce the cost function (MSE) over all data points. This is done by updating the values of B0 and B1 iteratively until we reach an optimal solution.

A regression model uses the gradient descent algorithm to update the coefficients of the line: it starts from randomly chosen coefficient values and then iteratively updates them, reducing the cost function until it reaches a minimum.
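The update loop described above can be sketched as follows. The learning rate and iteration count are illustrative choices, and the coefficients are initialized to zero here for simplicity rather than randomly:

```python
# Gradient descent for simple linear regression, minimizing MSE.

def gradient_descent(xs, ys, lr=0.05, steps=5000):
    """Iteratively update (b0, b1) to reduce the MSE cost."""
    b0, b1 = 0.0, 0.0  # initial coefficient values
    n = len(xs)
    for _ in range(steps):
        preds = [b0 + b1 * x for x in xs]
        # Partial derivatives of MSE with respect to b0 and b1
        d_b0 = (-2 / n) * sum(y - p for y, p in zip(ys, preds))
        d_b1 = (-2 / n) * sum((y - p) * x for x, y, p in zip(xs, ys, preds))
        # Step against the gradient, scaled by the learning rate
        b0 -= lr * d_b0
        b1 -= lr * d_b1
    return b0, b1

# Data generated by Y = 2 + 3*X; the estimates converge toward
# the true intercept 2 and slope 3.
b0, b1 = gradient_descent([1, 2, 3, 4], [5, 8, 11, 14])
```

Note that the learning rate matters: too large and the updates diverge, too small and convergence is very slow.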

# Evaluation Metrics for Linear Regression

The strength of any linear regression model can be assessed using various evaluation metrics. These evaluation metrics usually provide a measure of how well the observed outputs are being generated by the model.

The most used metrics are:

- Coefficient of Determination or R-Squared (R2)
- Root Mean Squared Error (RMSE) and Residual Standard Error (RSE)

# Coefficient of Determination or R-Squared (R2)

R-Squared is a number that expresses the amount of variation in the data that is explained/captured by the developed model. It always ranges between 0 and 1. Overall, the higher the value of R-squared, the better the model fits the data.

Mathematically it can be represented as,

**R² = 1 - ( RSS / TSS )**

**Residual Sum of Squares (RSS)** is defined as the sum of squares of the residuals for each data point in the plot/data. It is a measure of the difference between the expected and the actual observed output.

**Total Sum of Squares (TSS)** is defined as the sum of squared differences of the data points from the mean of the response variable.
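Putting RSS and TSS together gives a direct implementation of the R² formula above; the function name is illustrative:

```python
# R^2 = 1 - RSS/TSS for observed values ys and model predictions preds.

def r_squared(ys, preds):
    """Fraction of variation in ys captured by the model."""
    mean_y = sum(ys) / len(ys)
    rss = sum((y - p) ** 2 for y, p in zip(ys, preds))  # residual sum of squares
    tss = sum((y - mean_y) ** 2 for y in ys)            # total sum of squares
    return 1 - rss / tss

# A perfect model explains all variation: R^2 = 1.
assert r_squared([1, 2, 3], [1, 2, 3]) == 1
# A model that always predicts the mean explains none: R^2 = 0.
assert r_squared([1, 2, 3], [2, 2, 2]) == 0
```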


# Root Mean Squared Error

The Root Mean Squared Error is the square root of the variance of the residuals. It specifies the absolute fit of the model to the data, i.e. how close the observed data points are to the predicted values. Mathematically it can be represented as:

**RMSE = sqrt( Σ (Yi - Ypredicted)² / n )**

To make this estimate unbiased, one has to divide the sum of the squared residuals by the **degrees of freedom** rather than the total number of data points in the model. This term is then called the **Residual Standard Error (RSE)**. For simple linear regression, two parameters (intercept and slope) are estimated, so the degrees of freedom are n - 2. Mathematically it can be represented as:

**RSE = sqrt( Σ (Yi - Ypredicted)² / (n - 2) )**
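The two formulas differ only in the denominator, as this small sketch shows (the `n_params=2` default reflects the simple linear regression case):

```python
import math

def rmse(ys, preds):
    """Root mean squared error: divide by the number of points n."""
    n = len(ys)
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(ys, preds)) / n)

def rse(ys, preds, n_params=2):
    """Residual standard error: divide by the degrees of freedom
    n - n_params (n_params = 2 for simple linear regression)."""
    df = len(ys) - n_params
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(ys, preds)) / df)

# Predictions off by 1 at every point: RMSE = sqrt(4/4) = 1,
# while RSE = sqrt(4/2) is larger because df < n.
print(rmse([1, 2, 3, 4], [2, 3, 4, 5]))  # 1.0
```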

R-squared is a better measure than RMSE, because the value of Root Mean Squared Error depends on the units of the variables (i.e. it is not a normalized measure) and can change with a change in the units of the variables.

# Assumptions of Linear Regression

Regression is a parametric approach, which means that it makes assumptions about the data for the purpose of analysis. For successful regression analysis, it’s essential to validate the following assumptions.

**1. Linearity of residuals:** There needs to be a linear relationship between the dependent variable and the independent variable(s).

**2. Independence of residuals:** The error terms should not be dependent on one another (as in time-series data, where the next value depends on the previous one). There should be no correlation between the residual terms. The presence of such correlation is known as **autocorrelation**.

There should not be any visible patterns in the error terms.

**3. Normal distribution of residuals:** The residuals should follow a normal distribution with a mean equal to zero or close to zero. This is checked in order to confirm whether the selected line is actually the line of best fit.

If the error terms are non-normally distributed, it suggests that there are a few unusual data points that must be studied closely to build a better model.

**4. Equal variance of residuals:** The error terms must have constant variance. This property is known as **homoscedasticity**.

The presence of non-constant variance in the error terms is referred to as **Heteroscedasticity**. Generally, non-constant variance arises in the presence of *outliers or extreme leverage values*.
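A rough way to eyeball the last two assumptions in code is to check that the residuals have near-zero mean and a similar spread across the two halves of the data. This is only a heuristic illustration, not a formal statistical test:

```python
# Heuristic residual diagnostics: near-zero mean and roughly equal
# variance in the two halves of the (ordered) residuals.

def residual_diagnostics(res):
    n = len(res)
    mean = sum(res) / n
    half = n // 2

    def var(rs):
        m = sum(rs) / len(rs)
        return sum((r - m) ** 2 for r in rs) / len(rs)

    return {
        "mean": mean,                        # should be close to zero
        "var_first_half": var(res[:half]),   # roughly equal halves
        "var_second_half": var(res[half:]),  # suggest homoscedasticity
    }

diag = residual_diagnostics([0.5, -0.5, 0.5, -0.5])
# diag["mean"] is 0.0 and both half-variances are 0.25
```

In practice one would plot the residuals against the fitted values and look for visible patterns, as the text above describes.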

In the next article we will apply linear regression to our data set.

If you found this article helpful, show your appreciation by liking and sharing it.