I am currently taking Thompson Rivers University’s Data Structures and Algorithms course. To help with my understanding, I am also doing the Algorithmic Toolbox course on Coursera simultaneously. Short summary of this Coursera course: it gives insight into different algorithm design paradigms (something I am weak at), for example, Greedy Algorithms, Dynamic Programming, Divide and Conquer, etc.

For my own reference:
Given $X \in \mathbb{R}^{p}$ as the real-valued random input vector and $Y \in \mathbb{R}$ as a real-valued quantitative output, our objective is to seek a function $f(X)$ with which we can predict the output based on the input.

From Page 11 of Elements of Statistical Learning
The linear model is given by
$\hat{Y} = \hat{\beta}_0 + \sum_{j = 1}^{p}X_j \hat{\beta}_j$
and if we include the intercept (known as the bias in machine learning) in the vector of inputs $X$ as a constant variable 1, then the linear model can be written in the vector form of an inner product: $\hat{Y} = X^{T}\hat{\beta}$.
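As a quick sanity check of the inner-product form, here is a minimal sketch in NumPy. The coefficient and input values are made up for illustration; the point is that absorbing the intercept into the input vector as a constant 1 turns the prediction into a single dot product.

```python
import numpy as np

# Hypothetical fitted coefficients: [intercept, beta_1, beta_2]
beta_hat = np.array([0.5, 2.0, -1.0])
# Raw input vector (p = 2)
x = np.array([3.0, 1.5])

# Prepend the constant 1 so the intercept rides along in the inner product
x_aug = np.concatenate(([1.0], x))
y_hat = x_aug @ beta_hat          # X^T beta
print(y_hat)                      # 0.5 + 2.0*3.0 + (-1.0)*1.5 = 5.0
```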

In classification problems, linear discriminant analysis and logistic regression are methods of establishing linear decision boundaries that delineate the data into classes. Separating hyperplanes can also be constructed to classify data, and when the classes are linearly separable, this yields a hard-margin support vector machine.
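To make the idea of a linear decision boundary concrete, here is a toy sketch using the perceptron rule (a simpler relative of logistic regression and the linear SVM, not the methods above themselves). The two point clouds, learning loop, and seed are all made up for illustration; on linearly separable data like this, the perceptron is guaranteed to find a separating hyperplane $w^{T}x + b = 0$.

```python
import numpy as np

# Two linearly separable clouds of points, labelled -1 and +1
rng = np.random.default_rng(0)
X0 = rng.normal([-2.0, -2.0], 0.5, size=(20, 2))   # class -1 cloud
X1 = rng.normal([2.0, 2.0], 0.5, size=(20, 2))     # class +1 cloud
X = np.vstack([X0, X1])
y = np.array([-1] * 20 + [1] * 20)

# Perceptron: nudge the hyperplane toward each misclassified point
w = np.zeros(2)
b = 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:     # on the wrong side of w.x + b = 0
            w += yi * xi
            b += yi

preds = np.sign(X @ w + b)
print((preds == y).mean())             # separable data, so accuracy is 1.0
```

The decision boundary here is the line $w^{T}x + b = 0$, which is exactly the kind of linear boundary that LDA, logistic regression, and separating hyperplanes produce; they differ in how they choose $w$ and $b$.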
