The first half of the course will focus on supervised learning techniques for regression and classification. In supervised learning, we observe inputs together with their corresponding outputs and try to learn a mapping that predicts the output for a new input. We will discuss several fundamental methods for these tasks and algorithms for their optimization. The course is designed to help you develop a mathematical understanding of the algorithms, and then apply them using data sets provided in the assignments.
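As a minimal illustration of the supervised setting (not part of the course materials), the sketch below fits a least-squares line to hypothetical toy data with NumPy: the inputs and outputs are both observed, and the learned weights can then predict outputs for new inputs.

```python
import numpy as np

# Hypothetical toy data: noisy outputs from the line y = 2x + 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=50)

# Design matrix with a bias column, then ordinary least squares:
# find w minimizing ||Xw - y||^2
X = np.column_stack([x, np.ones_like(x)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(w)  # close to [2.0, 1.0]
```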
In the second half of the course, we shift to unsupervised learning techniques. In unsupervised learning, the end goal is less clear-cut than predicting an output based on a corresponding input. We will cover three fundamental problems of unsupervised learning: data clustering, matrix factorization, and sequential models for order-dependent data. Some applications of these models include recommendation systems and topic modeling.
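To contrast, here is a hedged sketch of one unsupervised task covered later, data clustering, using Lloyd's k-means algorithm on hypothetical toy data: no outputs are given, and the algorithm discovers group structure on its own.

```python
import numpy as np

# Hypothetical toy data: two well-separated 2-D blobs, no labels
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.5, (30, 2)),
                 rng.normal(5.0, 0.5, (30, 2))])

# Lloyd's algorithm for k-means with k = 2;
# for simplicity, initialize one center in each region
centers = pts[[0, 30]].copy()
for _ in range(20):
    # assignment step: each point goes to its nearest center
    labels = np.argmin(((pts[:, None] - centers) ** 2).sum(-1), axis=1)
    # update step: each center moves to the mean of its points
    centers = np.array([pts[labels == k].mean(axis=0) for k in range(2)])

print(np.sort(centers[:, 0]))  # roughly [0, 5]
```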
The course follows the book "Pattern Recognition and Machine Learning" by C. Bishop; it therefore covers a statistical approach to Machine Learning.
1 - Regression
2 - Ridge Regression
3 - Bayesian Methods
4 - Foundational Classification Algorithms I
- kNN
- Perceptron
5 - Foundational Classification Algorithms II
- Logistic Regression
- Kernel Methods and Gaussian Processes
6 - Intermediate Classifiers I
- SVM
- Trees: Bagging, Random Forest
7 - Intermediate Classifiers II
- Boosting
- k-Means Clustering
8 - Clustering Methods
- Expectation - Maximization
- Gaussian-Mixture Models
9 - Recommendation Systems
- Collaborative Filtering
- Topic Modelling