
Machine Learning: The Linear Hypothesis

    \[ \hat{y} = h_\theta(x) = \theta_0 + \theta_1 x_1 \]

The very first formula I learned in machine learning (and the first time I tried writing in LaTeX!). So pretty cool, but what does it mean? This is an example of a univariate hypothesis. ‘Univariate’ is a fancy way of saying that I have one variable (let’s call it ‘x’ for now) that I am using to predict another variable (say ‘y’ for now).

\( \hat{y} \) is read as ‘predicted value of y’.

\( h_\theta(x) \) is the hypothesis. It represents the formula that will be used to calculate ‘y’ for given values of ‘x’.

\( \theta_0 + \theta_1 x_1 \) is the actual equation for predicting ‘y’.
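
To make this concrete, here is a minimal sketch of the hypothesis as a plain Python function (my own illustration, not from any particular library; the parameter values below are made up):

    def h(theta0, theta1, x):
        """Univariate linear hypothesis: predict y from a single feature x."""
        return theta0 + theta1 * x

    # Made-up example: intercept 1.0, slope 2.0, input x = 3.0
    y_hat = h(1.0, 2.0, 3.0)
    print(y_hat)  # 7.0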

I’ve forgotten a lot of the math I used to know, so it took a while for me to realize that this is actually the slope-intercept equation of a line:

    \[y = b + mx\]

Digging still deeper into the recesses of my mind, I remember that a line is just a set of x and y coordinates. If one of the paired coordinates is known, its mate can be found by solving this equation. With the linear hypothesis, ‘x’ is known and ‘y’ will be calculated.
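
To check that with made-up numbers: if the line is \( y = 1 + 2x \) and ‘x’ is known to be 3, then

    \[ y = 1 + 2(3) = 7 \]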

Tying this back to machine learning, a machine can ‘learn’ to predict ‘y’ for new values of ‘x.’ It does this by finding the intercept and slope of the line that best fits the initial data (also known as training data). Fitting the model requires two other pieces: gradient descent and the cost function. Check out this post for an overview of how all three pieces work together.
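
As a rough preview of how those pieces fit together, here is a toy gradient descent sketch in Python (my own example; the training data, learning rate, and iteration count are all made-up choices, not from any real dataset):

    # Made-up training data that happens to lie on the line y = 1 + 2x
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [3.0, 5.0, 7.0, 9.0]

    theta0, theta1 = 0.0, 0.0   # start from an arbitrary guess
    alpha = 0.05                # learning rate, chosen by hand
    m = len(xs)

    for _ in range(2000):
        # Gradients of the mean squared error cost with respect to each theta
        grad0 = sum((theta0 + theta1 * x - y) for x, y in zip(xs, ys)) / m
        grad1 = sum((theta0 + theta1 * x - y) * x for x, y in zip(xs, ys)) / m
        # Step both parameters downhill at the same time
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1

    print(theta0, theta1)  # approaches 1.0 and 2.0, the line that fits the data

Each pass computes how far the current line’s predictions are from the training data (the cost) and nudges both thetas in the direction that shrinks that error (gradient descent).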
