The final part of the basic supervised machine learning trinity is the gradient descent algorithm. Given a hypothesis and a cost function, this algorithm iterates through different values of ‘theta’ (remember, this is a parameter set) to find a local optimum for a particular data set. Right, but that explanation is […]
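To make the iteration concrete, here is a minimal sketch of batch gradient descent for a univariate linear hypothesis. The function name, learning rate `alpha`, and the toy data are all illustrative assumptions, not taken from the post itself:

```python
# A sketch of batch gradient descent for h(x) = theta0 + theta1 * x.
# Names (alpha, iterations) and the toy data set are illustrative.

def gradient_descent(xs, ys, alpha=0.01, iterations=1000):
    """Iteratively adjust theta0 and theta1 to reduce squared error."""
    m = len(xs)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        # Prediction errors under the current parameters.
        errors = [(theta0 + theta1 * x) - y for x, y in zip(xs, ys)]
        # Gradients of the squared-error cost, then a simultaneous update.
        grad0 = sum(errors) / m
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Toy data lying exactly on y = 1 + 2x; descent should approach (1, 2).
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]
print(gradient_descent(xs, ys, alpha=0.1, iterations=5000))
```

Note the simultaneous update: both gradients are computed from the *old* parameters before either parameter changes, which is the detail the lectures stress.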

# Category: Machine Learning

Notes on machine learning from Coursera class and other sources.

Time to introduce a vital concept in machine learning: parameters. Previously, I talked about ‘thetaZero’ and ‘thetaOne’ and their use in the linear hypothesis algorithm. The proper term for them is ‘parameters,’ and they belong to a set ‘theta,’ commonly notated as Θ. It is best to think of ‘theta’ as (θ₀, θ₁, …, θₙ), where ‘n’ is […]

I call this a cost function rather than the cost function because one cost function does not fit all. Cost functions vary depending on the kind of data set or machine learning model. For the purposes of machine learning, cost functions serve as the algorithms that measure how well a hypothesis performs. This equation comes […]
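As a small worked illustration, here is the squared-error cost used with a univariate linear hypothesis. The function name, the 1/(2m) convention, and the toy data are assumptions for the sketch:

```python
# A sketch of the squared-error cost J(theta0, theta1) for a
# univariate linear hypothesis; names and data are illustrative.

def cost(theta0, theta1, xs, ys):
    """Mean of the squared prediction errors, halved (the 1/(2m) convention)."""
    m = len(xs)
    total = sum(((theta0 + theta1 * x) - y) ** 2 for x, y in zip(xs, ys))
    return total / (2 * m)

# A hypothesis that fits the data perfectly has zero cost.
xs = [0, 1, 2]
ys = [1, 3, 5]
print(cost(1.0, 2.0, xs, ys))  # 0.0 -- y = 1 + 2x fits exactly
print(cost(0.0, 0.0, xs, ys))  # larger: this hypothesis fits badly
```

The second call returning a larger number is the whole point: the cost function turns "this hypothesis fits badly" into a number that gradient descent can shrink.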

The very first formula I learned in machine learning (and the first time I tried writing in LaTeX!). Pretty cool, but what does it mean? This is an example of a univariate hypothesis. ‘Univariate’ is a fancy way of saying that I have one variable (let’s call it ‘x’ for now) that […]
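A univariate hypothesis in code is just a line with two parameters. This tiny sketch assumes the standard form h(x) = θ₀ + θ₁x; the values are made up:

```python
# A sketch of a univariate hypothesis: one input variable x,
# two parameters theta0 (intercept) and theta1 (slope).

def hypothesis(theta0, theta1, x):
    """Predict y for a single input x under a linear model."""
    return theta0 + theta1 * x

print(hypothesis(1.0, 2.0, 3.0))  # 7.0
```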

I am realizing that machine learning likes mathematical notation. Time to figure out how to put a formula in a post. Since the 1980s, the go-to tool for publishing mathematical papers has been LaTeX. It has a cool history, but to write my study notes, I just needed to figure out how to use […]
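For a sense of what that looks like, here is the kind of snippet one might drop into a post to typeset the univariate hypothesis. The exact delimiters depend on the blog's LaTeX plugin, so treat this as a sketch:

```latex
% Display-style formula for the univariate hypothesis
\[
  h_{\theta}(x) = \theta_{0} + \theta_{1} x
\]
```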

Linear regression, cost function, and gradient descent sum up the first two weeks of Professor Ng’s Machine Learning class on Coursera. Since I am not aspiring to be a mathematician, I am going to refer to them as algorithms. Each of these algorithms has a particular role in the design of a machine learning solution. Linear regression, […]

I recently signed up for Machine Learning on Coursera (Andrew Ng, Stanford). In the first week of lectures, I already felt lost in the jargon. But nothing worth learning comes easy, so persistence, persistence, persistence. The first concept I realized that I didn’t understand involved fitting the model. When I searched the web, I found […]