Linear regression, cost function and gradient descent sum up the first two weeks of Professor Ng’s Machine Learning class on Coursera. Since I am not aspiring to be a mathematician, I am going to refer to them as algorithms. Each of these algorithms has a particular place when designing a machine learning solution.
Linear regression, based on linear algebra and statistics, models the relationship between variables in a set of data. The cost function, based on statistics, measures how far the model's predictions vary from the actual data. Gradient descent, based on calculus, finds the model parameters that make the cost function as small as possible. Together, these three algorithms create the machine learning solution for a set of data.
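To make that concrete, here is a minimal sketch of all three pieces in Python (the course itself uses Octave, and the square-footage numbers below are made up for illustration): a one-variable linear model, a mean-squared-error cost function, and a gradient descent loop that nudges the parameters downhill.

```python
import numpy as np

# Made-up data: square footage (x) vs. price in thousands (y)
x = np.array([1000.0, 1500.0, 2000.0, 2500.0, 3000.0])
y = np.array([200.0, 290.0, 410.0, 490.0, 610.0])

# Scale the feature so a simple fixed learning rate converges
x_scaled = (x - x.mean()) / x.std()
m = len(x_scaled)

theta0, theta1 = 0.0, 0.0   # model: h(x) = theta0 + theta1 * x
alpha = 0.1                 # learning rate

def cost(t0, t1):
    # Mean squared error cost: J = 1/(2m) * sum((h(x) - y)^2)
    predictions = t0 + t1 * x_scaled
    return np.sum((predictions - y) ** 2) / (2 * m)

for _ in range(1000):
    error = (theta0 + theta1 * x_scaled) - y
    # Partial derivatives of J with respect to each parameter
    grad0 = np.sum(error) / m
    grad1 = np.sum(error * x_scaled) / m
    # Step downhill, against the gradient
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

# After training, theta0 and theta1 define the fitted line,
# and cost(theta0, theta1) reports how well it fits the data.
```

Each pass through the loop computes how wrong the current line is, then moves both parameters a small step in the direction that reduces that error.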
There are two major qualities the data must have for this machine learning solution to apply. First, the data must have a known relationship between variables. Usually, this means there already exists a set of data with known values for all variables, and fitting the model means running the algorithm against that data set. This kind of learning is called 'supervised' learning. Second, the data needs to be continuous within a range. Things like height, weight, and square footage are examples of continuous data. Lines aren't very good at representing data that falls into distinct categories. Machine learning jargon calls this 'regression' analysis.
I can now say that I successfully built my first supervised regression machine learning algorithm. And, more importantly, I understand what it means!