Putting the pieces of Machine Learning (ML) together is not easy. It involves knowledge from multiple disciplines. Let’s approach the topics more systematically. Some people try to avoid the math in ML, but without it you will not get far. Fortunately, much of it is high-school math; the real issue is whether it is introduced properly. So I will not shy away from the math, but I will give you enough background to follow it.
ML algorithms build on areas including information theory, probability, data distributions, Bayesian inference, matrix decomposition, and statistics. We will cover these first in this article on the fundamentals.
Machine Learning Summary (Fundamental)
“One learning algorithm” hypothesizes that human intelligence may be due to one learning algorithm. While Deep Learning…
The second article covers the basic ML algorithms, including linear regression, classification, kernels, and Bayesian linear regression. You may know some of these topics already, so we will be brief on the well-understood areas. But we will spend more time on Gaussian Processes and SVMs since the concepts are relatively complex. We will also detail many different approaches to clustering and decision trees, and cover basic ML concepts including the cost function, regularization, and model optimization (MLE, MAP). Finally, we will touch on some deep learning topics, including normalization, LSTM, and energy-based models like the RBM.
Machine Learning Summary — Algorithm
In the last article, we looked into the fundamentals of ML and some ML algorithms. In this article, we dig deeper into…
These two articles involve many proofs. In order not to overwhelm you with details, the proofs are separated into a different article.
You may feel overwhelmed by the math in many research papers. Probability distributions, calculus, linear algebra, statistics, and optimization are the major areas of math that we need for ML. The last two are huge topics, so I will cover them only when needed. If you had problems understanding the linear algebra in the previous articles, the article below can refresh your memory. Pay special attention to matrix factorization; it is used heavily in dimensionality reduction in ML.
Machine Learning & Linear Algebra
Linear algebra is essential in Machine Learning (ML) and Deep Learning (DL). Even if we struggle in creating precise…
Eigenvectors may look simple, but they are a powerful concept and play a major role in ML. Google ranks pages based on a similar concept (PageRank). Here we dedicate an article to it.
Machine Learning & Linear Algebra — Eigenvalue and eigenvector
Eigenvalues and eigenvectors are probably among the most important concepts in linear algebra. It is like bumping into a…
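To give a quick taste of why eigenvectors matter, here is a minimal sketch of power iteration, the idea behind PageRank-style ranking. This is a toy example of my own (the link matrix is made up, not data from the linked article): repeatedly multiplying a vector by a matrix pulls it toward the dominant eigenvector, which we read off as the page “importance”.

```python
import numpy as np

# Toy link matrix: entry [i, j] is the probability of jumping from page j to page i.
# Each column sums to 1, so the dominant eigenvector (eigenvalue 1) is the
# stationary distribution used to rank the pages.
P = np.array([[0.0, 0.5, 0.3],
              [0.7, 0.0, 0.7],
              [0.3, 0.5, 0.0]])

rank = np.ones(3) / 3            # start from a uniform guess
for _ in range(100):             # power iteration
    rank = P @ rank
    rank /= rank.sum()           # keep it a probability distribution

print(rank)                      # dominant eigenvector = page "importance"
```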
If you are irritated by statements like “the matrix is positive semidefinite and therefore …”, you can take a look here at the properties of some special matrices constantly used in ML.
Machine Learning & Linear Algebra — Special Matrices
In the literature, we often deal with statements like “since the matrix is symmetric, it has orthonormal eigenvectors”. It…
SVD and PCA are core topics in ML. Imagine using linear algebra to untangle data and discover its structure: instead of processing raw data, we start extracting knowledge.
Machine Learning — Singular Value Decomposition (SVD) & Principal Component Analysis (PCA)
In machine learning (ML), one of the most important linear algebra concepts is the singular value decomposition (SVD)…
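To make this concrete, here is a minimal sketch of PCA computed with SVD on synthetic 2-D data. It is a toy example of mine (the data and shapes are made up, not from the linked article): the right singular vectors of the centered data matrix give the principal directions, and projecting onto the first one reduces the dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data with one dominant direction.
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
X = X - X.mean(axis=0)                      # center the data first

# SVD of the centered data matrix: X = U S V^T.
U, S, Vt = np.linalg.svd(X, full_matrices=False)

print("principal directions:\n", Vt)        # rows of V^T = principal components
print("variance explained:", S**2 / (len(X) - 1))

Z = X @ Vt[0]                               # project onto the first component
print("reduced data shape:", Z.shape)       # (200,) — from 2-D down to 1-D
```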
Many AI problems are rephrased as optimization problems. You may already be familiar with conventional methods like gradient descent. We will cover more advanced optimization methods here. The first is the Lagrange multiplier for optimizing problems under constraints.
Machine Learning — Lagrange multiplier & Dual decomposition
Optimization is a critical step in ML. In this Machine Learning series, we will look into two specific optimization…
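As a small worked illustration (my own toy problem, not taken from the linked article): maximize f(x, y) = x + y subject to x² + y² = 1. The Lagrange conditions ∇f = λ∇g together with the constraint form a system of equations, which the sketch below solves numerically with SciPy.

```python
import numpy as np
from scipy.optimize import fsolve

# Maximize f(x, y) = x + y  subject to  g(x, y) = x^2 + y^2 - 1 = 0.
# The Lagrangian is L = f - lam * g, and a constrained optimum must satisfy:
#   1 - 2 * lam * x = 0
#   1 - 2 * lam * y = 0
#   x^2 + y^2 - 1   = 0
def stationarity(v):
    x, y, lam = v
    return [1 - 2 * lam * x,
            1 - 2 * lam * y,
            x**2 + y**2 - 1]

x, y, lam = fsolve(stationarity, x0=[0.5, 0.5, 0.5])
print(x, y, lam)   # roughly 0.707, 0.707, 0.707, i.e. x = y = 1/sqrt(2)
```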
Many machine learning (ML) problems deal with a dilemma: if we knew latent factor A, we could determine latent factor B, and if we knew B, we could determine A. Either way solves the ML problem. Unfortunately, we know neither A nor B. In ML, this can often be solved by a powerful algorithm called the Expectation-Maximization (EM) algorithm.
Machine Learning — Expectation-Maximization Algorithm (EM)
The chicken and egg problem is a major headache for many entrepreneurs. Many machine learning (ML) problems deal with a…
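Here is a minimal sketch of EM fitting a two-component 1-D Gaussian mixture to synthetic data (a toy example of my own, assuming NumPy; it is not the linked article’s code). The responsibilities play the role of the unknown factor A and the Gaussian parameters play the role of B; EM alternates between the two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from two Gaussians; EM must recover the components
# without knowing which point came from which.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

# Initial guesses for mixture weights, means, and standard deviations.
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: responsibility of each component for each point (the unknown "A").
    r = pi * gauss(x[:, None], mu, sigma)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: re-estimate the parameters given the responsibilities (the unknown "B").
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(pi, mu, sigma)   # should be close to the true mixture parameters
```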
The graphical model (GM) is a branch of ML that uses a graph to represent a domain problem. Many ML algorithms can be generalized and discussed as GMs, so studying them gives us a bird’s-eye view of many ML algorithms.
Machine Learning — Graphical Model
One major difference between Machine Learning (ML) and Deep Learning (DL) is the amount of domain knowledge sought to…
Inference is about making predictions from the data that we know. Here, we perform exact inference when the GM is known.
Machine Learning — Graphical Model Exact inference (Variable elimination, belief propagation…
In the previous article, we have learned how to represent a domain problem with a Graphical model. By discovering…
But finding an exact solution is usually hard. Here, we will approximate it with linear programming and dual decomposition.
Machine Learning — MAP inference with Linear Programming & Dual decomposition
Previously, we have discussed max-product inference in the Viterbi algorithm to make an exact MAP inference (Maximum a…
Next, we switch gears and find the solution by re-establishing p(x). This can be done through sampling, with methods like Monte Carlo sampling, importance sampling, the Metropolis algorithm, and Gibbs sampling that are commonly used in ML.
Machine Learning — Sampling-based Inference
Google DeepMind’s AlphaGo beat the Go champions by smart sampling. Humans solve problems with sampling all the time…
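To show the flavor of these methods, here is a minimal sketch of the Metropolis algorithm drawing samples from an unnormalized 1-D density (a toy target I made up, not code from the linked article). The key point is that the accept/reject step only needs p(x) up to a constant, which is exactly what makes it useful for posteriors we cannot normalize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized target density: a mixture of two bumps.
def p(x):
    return np.exp(-0.5 * (x + 2) ** 2) + 0.5 * np.exp(-0.5 * (x - 3) ** 2)

samples, x = [], 0.0
for _ in range(20000):
    proposal = x + rng.normal(scale=1.0)                 # symmetric random-walk proposal
    if rng.uniform() < min(1.0, p(proposal) / p(x)):     # Metropolis accept/reject step
        x = proposal
    samples.append(x)

samples = np.array(samples[2000:])                       # drop burn-in
print(samples.mean(), samples.std())                     # Monte Carlo estimates from the chain
```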
In ML, there are two major approximation approaches: sampling and variational inference. In the next article, we model the posterior with variational inference using a method like mean-field variational inference.
Machine Learning — Variational Inference
Bayes’ Theorem looks naively simple. We may model the prior and likelihood with known families of distribution easily…
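Here is a minimal sketch of the mean-field idea on a toy 2-D Gaussian target (my own example, assuming the standard coordinate-ascent updates for a Gaussian as given in texts such as Bishop; it is not code from the linked article). The factorized approximation q(x1)q(x2) recovers the true means but underestimates the variances, which is the classic behavior of mean-field VI on correlated targets.

```python
import numpy as np

# Toy target: a correlated 2-D Gaussian with mean mu and precision matrix L.
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
L = np.linalg.inv(Sigma)                      # precision matrix

# Mean-field approximation q(x1) q(x2): each factor is Gaussian.
# Coordinate ascent updates for a Gaussian target:
#   E[x1] <- mu1 - (L12 / L11) * (E[x2] - mu2),  var(x1) = 1 / L11   (symmetrically for x2)
m = np.zeros(2)                               # initial guesses for the factor means
for _ in range(50):
    m[0] = mu[0] - L[0, 1] / L[0, 0] * (m[1] - mu[1])
    m[1] = mu[1] - L[1, 0] / L[1, 1] * (m[0] - mu[0])

print("mean-field means:", m)                             # converge to the true means
print("mean-field variances:", 1 / L[0, 0], 1 / L[1, 1])  # smaller than the true variances
print("true variances:", Sigma[0, 0], Sigma[1, 1])
```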
We conclude the GM topic with a discussion of how to learn the model itself.
Machine Learning — Learning Graphical Model
We have learned how to model a problem and make exact and approximate inferences with a graphical model. It is time to…
Finally, we will look into some advanced ML algorithms.