Time to introduce a vital concept in machine learning: parameters. Previously, I talked about ‘thetaZero’ and ‘thetaOne’ and their use in the linear hypothesis algorithm. The proper term for them is ‘parameters’, and they belong to a set ‘theta’, commonly notated as θ. It is best to think of ‘theta’ as a vector (θ₀, θ₁, …, θₙ), where ‘n’ is […]
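A minimal sketch of the idea above, assuming Ng's notation: thetaZero and thetaOne collected into a single parameter vector `theta` that the linear hypothesis reads from (the specific values here are made up for illustration).

```python
import numpy as np

# Hypothetical parameter vector: theta[0] is 'thetaZero', theta[1] is 'thetaOne'.
theta = np.array([1.5, 0.5])

def hypothesis(x, theta):
    """Linear hypothesis h_theta(x) = theta_0 + theta_1 * x."""
    return theta[0] + theta[1] * x

print(hypothesis(2.0, theta))  # 1.5 + 0.5 * 2.0 = 2.5
```

Packing the parameters into one vector is what lets the later algorithms (the cost function and gradient descent) treat them as a single object to evaluate and update.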


I call this a cost function rather than the cost function because one cost function does not fit all. Cost functions vary depending on the kind of data set or machine learning model. In machine learning, a cost function is the algorithm that measures how well a hypothesis fits the data. This equation comes […]
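A sketch of one common choice for linear regression, assuming the mean-squared-error cost J(θ) = 1/(2m) · Σ(h_θ(x) − y)² from Ng's course; the sample data here is invented for illustration.

```python
import numpy as np

def cost(theta, X, y):
    """Mean squared error cost J(theta) = 1/(2m) * sum((X @ theta - y)^2)."""
    m = len(y)
    predictions = X @ theta
    return np.sum((predictions - y) ** 2) / (2 * m)

# X carries a leading column of ones so theta[0] acts as the intercept.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])

# theta = (0, 1) reproduces y exactly, so the cost is zero.
print(cost(np.array([0.0, 1.0]), X, y))  # 0.0
```

A lower J(θ) means the hypothesis fits the training data better, which is exactly the performance measurement the paragraph describes.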

Linear regression, the cost function, and gradient descent sum up the first two weeks of Professor Ng’s Machine Learning class on Coursera. Since I am not aspiring to be a mathematician, I am going to refer to them as algorithms. Each of these algorithms has a particular place in the design of a machine learning solution. Linear regression, […]
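The three pieces above fit together roughly like this: gradient descent repeatedly nudges the parameters in the direction that lowers the cost of the linear-regression hypothesis. A minimal batch gradient descent sketch, assuming the MSE cost from Ng's course; learning rate, iteration count, and data are illustrative choices, not values from the post.

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, iters=1000):
    """Batch gradient descent for linear regression with an MSE cost.

    Each step moves theta against the gradient of
    J(theta) = 1/(2m) * sum((X @ theta - y)^2).
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / m  # gradient of the MSE cost
        theta -= alpha * grad
    return theta

# Toy data following y = 1 + x; the intercept column of ones gives theta[0].
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])
theta = gradient_descent(X, y)
print(theta)  # converges toward [1.0, 1.0]
```

The learning rate `alpha` trades off speed against stability: too large and the updates overshoot and diverge, too small and convergence crawls.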

I recently signed up for Machine Learning on Coursera (Andrew Ng, Stanford). In the first week of lectures, I already felt lost in the jargon. But nothing worth learning comes easy, so persistence, persistence, persistence. The first concept I realized that I didn’t understand involved fitting the model. When I searched the web, I found […]