There are a few ways to categorize machine learning algorithms. Here I will give a brief introduction to these categories in plain words.

Statistical Learning

There is some debate on whether machine learning is just statistics. Here’s a really good comparison of the two by Brendan O’Connor, and here is a good discussion on StackExchange. My view is that some machine learning approaches are not statistical, even though they can be explained in statistical terms. But do not be puzzled when you see statistics everywhere in a machine learning textbook.

Supervised and Unsupervised Learning

Depending on the data we have and what our goal is, we need to decide whether to use supervised or unsupervised learning.

In supervised learning, we have a set of data (say, a set S) that is correctly labeled. We can then train algorithms on S, or on a subset of S, to predict the labels from the input variables. At the end, we typically choose the algorithm that minimizes the difference between its predicted labels and the true labels.

Example: We have the heights of 100 people, and each height is linked to a label corresponding to the age range of that person (i.e., “up to 2 years old”, “2 to 6”, “6 to 12”, “12 to 16”, “16 and older”). In supervised learning, we train algorithms that use the input variable (the height) to predict the label (the age range corresponding to that height), and we check how well the algorithms perform by comparing their predictions with the true labels.
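Here is a minimal sketch of this example in Python with scikit-learn. The heights, the age-range cutoffs, and the choice of a decision tree are all made-up assumptions for illustration; any classifier would do.

```python
# Supervised learning sketch: predict an age-range label from height.
# All data here is synthetic and the cutoffs are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
heights = rng.uniform(50, 190, size=(100, 1))  # input variable: height in cm


def age_range(h):
    # Hypothetical mapping from height to an age-range label.
    if h < 90:
        return "up to 2"
    if h < 120:
        return "2 to 6"
    if h < 155:
        return "6 to 12"
    if h < 175:
        return "12 to 16"
    return "16 and older"


labels = np.array([age_range(h) for h in heights.ravel()])  # the true labels

# Train on a subset of S and compare predictions with the true labels.
X_train, X_test, y_train, y_test = train_test_split(
    heights, labels, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```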

In unsupervised learning, we do not have correctly labeled data (or, if we do, we do not use the labels). So we cannot train algorithms based on their “correctness” (correctness in supervised learning is defined by the final purpose of the machine learning problem, which I will write about in future posts). Instead, we try to find meaningful structures and relationships in the data. (Unsupervised learning is also known in statistics as exploratory data analysis.)

Example: We have the same height data as above, but without the correct labels. What we can do is use unsupervised learning to cluster the heights into groups and then interpret those groups.
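And a matching sketch of the unsupervised version: same heights, no labels, and a clustering algorithm proposes groups that we then have to interpret ourselves. The choice of k-means and of five clusters is just an assumption for illustration.

```python
# Unsupervised learning sketch: group heights without any labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
heights = rng.uniform(50, 190, size=(100, 1))  # same data, no labels this time

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(heights)
for i, center in enumerate(sorted(kmeans.cluster_centers_.ravel())):
    # It is up to us to interpret what each group of heights means.
    print(f"group {i}: mean height around {center:.0f} cm")
```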

Parametric and Nonparametric Methods

“In statistics, a parametric model is a family of distributions that can be described using a finite number of parameters.” (Wikipedia) Thus, a parametric method in machine learning assumes that the distribution of the data takes a specific form, or, more precisely, that the underlying true classifier or regressor takes a specific form.

A nonparametric method, in contrast, makes no such assumption. As a result, nonparametric methods are more flexible but generally harder to interpret.
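To make the contrast concrete, here is a small sketch comparing a parametric regressor (linear regression, whose whole model is a slope and an intercept) with a nonparametric one (k-nearest neighbors, which keeps the training data around instead of summarizing it in a fixed set of parameters). The data and the choice of models are illustrative assumptions.

```python
# Parametric vs. nonparametric sketch on the same synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression      # parametric: slope + intercept
from sklearn.neighbors import KNeighborsRegressor      # nonparametric: stores the data

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

linear = LinearRegression().fit(X, y)               # assumes y is linear in X
knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)  # assumes no specific form

# The parametric model is fully described by two numbers; the nonparametric
# model's "description" is essentially the 200 training points themselves.
print("linear model:", linear.coef_, linear.intercept_)
print("predictions at x=3:", linear.predict([[3.0]]), knn.predict([[3.0]]))
```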

Generative Model and Discriminative Model

An algorithm based on a generative model can generate data similar to the training data, because it learns the joint probability of the input variables and the labels.

An algorithm based on a discriminative model, on the other hand, learns only the conditional probability: given the input variables, what is the label going to be?

Here’s a discussion from StackExchange with some interesting examples.
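Below is a small sketch of this contrast, using Gaussian naive Bayes as the generative model and logistic regression as the discriminative one (a common textbook pairing; the height data is made up, and the naive Bayes attributes theta_ and var_ used here are those of recent scikit-learn versions).

```python
# Generative vs. discriminative sketch on two synthetic groups of heights.
import numpy as np
from sklearn.naive_bayes import GaussianNB            # generative: models p(x, y)
from sklearn.linear_model import LogisticRegression   # discriminative: models p(y | x)

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(160, 5, 50), rng.normal(180, 5, 50)]).reshape(-1, 1)
y = np.array([0] * 50 + [1] * 50)

gen = GaussianNB().fit(X, y)
disc = LogisticRegression().fit(X, y)

# The generative model learned a per-class Gaussian, so we can generate a
# plausible new input for class 1 from what it learned.
mean, var = gen.theta_[1, 0], gen.var_[1, 0]
print("generated height for class 1:", rng.normal(mean, np.sqrt(var)))

# The discriminative model only answers: given this input, what is the label?
print("p(label | height = 175):", disc.predict_proba([[175.0]]))
```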

 

These categories can easily overlap, depending on the specific algorithm.
