
Machine Learning Parameters

    \begin{equation*} \theta = \{\theta_0, \theta_1, \ldots, \theta_n \mid n \in \mathbb{Z}^+\} \end{equation*}

Time to introduce a vital concept in machine learning: parameters. Previously, I talked about ‘thetaZero’ and ‘thetaOne’ and their use in the linear hypothesis. The proper term for them is ‘parameters’, and they belong to a set ‘theta’, commonly notated as

    \begin{equation*} \theta \end{equation*}

It is best to think of ‘theta’ as \{\theta_0, \theta_1, \ldots, \theta_n\}, where ‘n’ is some positive integer. Each member of the set can have a different value. I only mention two members of the set because the linear hypothesis has only two parameters: the intercept of the line and the slope of the line. A different machine learning hypothesis might have more parameters than this.
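
To make this concrete, here is a minimal sketch in Python of the linear hypothesis written as a function of its two parameters. The names (linear_hypothesis, theta, x) and the example values are my own illustrative choices, not from any particular library:

    # A minimal sketch of the linear hypothesis, parameterized by theta.
    def linear_hypothesis(theta, x):
        """Return h_theta(x) = theta_0 + theta_1 * x.

        theta: a sequence of two parameters [theta_0, theta_1]
        x: a single input feature value
        """
        theta_0, theta_1 = theta  # theta_0 is the intercept, theta_1 is the slope
        return theta_0 + theta_1 * x

    theta = [1.0, 0.5]                       # one particular choice of parameters
    print(linear_hypothesis(theta, 3.0))     # 1.0 + 0.5 * 3.0 = 2.5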

A crucial concept related to parameters is that although they change, they are NOT considered variables or features in machine learning. This can be very confusing, and I expect the differences will become clearer over time. For now, I just want to clarify that parameters determine the specific shape a model takes. A straight line can be placed anywhere on a graph, right? It might be very steep or very flat, but it will ALWAYS be a straight line. The parameters of the linear hypothesis specify where the line sits and how steep its slope is. Generally, parameters define the shape of a hypothesis.
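
As a rough illustration (again in Python, with made-up parameter values), changing the parameters shifts and tilts the line, but every choice of theta still produces a straight line:

    # Different parameter values give different straight lines:
    # theta_0 shifts the line up or down, theta_1 tilts it.
    for theta in ([0.0, 1.0], [2.0, 1.0], [0.0, -3.0]):
        theta_0, theta_1 = theta
        ys = [theta_0 + theta_1 * x for x in range(4)]
        print(f"theta = {theta} -> h(x) for x in 0..3: {ys}")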
