It’s an old saying that practice makes perfect. A wise soul who realized that iterations yield improvement might have concluded that enough improvements are perceived as perfection, and ended up with that adage. Why is it that, even though the truth is out there, not everyone can grab this fruit? Do they not desire it? Do they not want it badly enough? Or is it the case that not everyone deserves their “golden apple”?
I do not have a ready-to-eat answer. I have aimed to be extraordinary for as long as my memory serves me, and that desire is still burning like the Sun. Yet neither the number of endeavors that ended in success nor the extent of that success pleases my heart. A few sticking points in my approach are obvious: I crave to become the best, in the shortest time possible, with the outcome guaranteed, while staying in my comfort zone and without changing my identity drastically. …
No, seriously. A person who aims high and wants to build a working science project with molten lava erupting perfectly, like liquid chocolate oozing out of a marbled brownie, as their first summer school project with clay, sand, and fuming sulphuric acid: that is what an average perfectionist looks like. Or they finish the mathematics assignment in the shortest, most optimal way possible, or don’t complete it at all, because they would either be at the top or not run the race at all. There is a general sense of reaching the rooftop and going beyond, even though the agent might be unaware of the room, the floor, and the surroundings. In my case, with ambitions of getting a six-pack, I will either follow a diet hitting the exact macronutrients, with 140g of protein, 260g of carbs, and 65g of fats, or eat 2 pounds of mozzarella cheese and 17 baby gorillas and dip myself in a chocolate river. Well, not exactly, but that’s how it feels in my mind. It’s either a perfect day of eating chicken, broccoli, oats, and pasta, or a terrible junk day where I ate a couple of Oreos or a few slices of pizza. …
What is linear regression? It is a predictive modeling technique that finds a relationship between independent variable(s) and a continuous dependent variable. The independent variables (iv) can be categorical (e.g. US/UK, 0/1) or continuous (1729, 3.141, etc.), while the dependent variable (dv) is continuous. The underlying function mapping the iv’s to the dv can be linear, quadratic, polynomial, or some other non-linear function (like the sigmoid in logistic regression), but this article is about the linear technique.
Regression techniques are heavily used in real estate price prediction, financial forecasting, and predicting traffic arrival times (ETA).
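To make the setup concrete, here is a minimal sketch (not the article’s own code) of fitting a linear regression with scikit-learn; the house-size/price numbers below are invented purely for illustration.

```python
# A minimal sketch of linear regression with scikit-learn.
# The tiny house-size -> price dataset is made up for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# One continuous independent variable (iv): house size in square feet.
X = np.array([[750], [900], [1200], [1500], [1800]])
# Continuous dependent variable (dv): price in thousands of dollars.
y = np.array([150, 175, 230, 280, 330])

model = LinearRegression()
model.fit(X, y)                     # learns the slope (coef_) and intercept_

print(model.coef_, model.intercept_)
print(model.predict([[1000]]))      # price estimate for a 1000 sq ft house
```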
Brief: The Gaussian mixture model (GMM) is a popular unsupervised learning algorithm. The GMM approach is similar to the K-Means clustering algorithm, but is more robust and therefore more useful thanks to its added sophistication. In this article, I give a bird’s-eye view, the mathematics (bayesic maths, nothing abnormal), a Python implementation from scratch, and one using the sklearn library.
It would be a good idea to check out my blog on K-Means clustering (3 min read) to get a basic idea of clustering, unsupervised learning, and the K-means technique. In clustering, given an unlabelled dataset X, we wish to group the samples into K clusters. In GMMs, it is assumed that each of the K sub-populations of X follows its own normal distribution, even though we only observe the probability distribution of the overall population X (hence the name Gaussian Mixture Model). …
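As a quick preview of the sklearn side (a sketch, not the article’s implementation), the snippet below fits a two-component GMM to synthetic data drawn from two normal distributions:

```python
# A minimal GMM sketch with scikit-learn; the two sub-populations are synthetic
# and only illustrate the idea of a mixture of Gaussians.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Unlabelled dataset X: samples from two different 2-D normal distributions.
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(200, 2)),
    rng.normal(loc=5.0, scale=1.5, size=(200, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)        # hard cluster assignment for each sample
probs = gmm.predict_proba(X)   # soft (probabilistic) membership per cluster

print(gmm.means_)    # estimated means of the K Gaussians
print(gmm.weights_)  # estimated mixing proportions
```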
Brief: K-means clustering is an unsupervised learning method. In this post, I introduce the idea of unsupervised learning and why it is useful. Then I cover K-means clustering: the mathematical formulation of the problem, a Python implementation from scratch, and one using machine learning libraries.
Typically, machine learning models make predictions on data, learning previously unseen patterns to drive important business decisions. When the dataset consists of labels along with the data points, it is known as supervised learning, with spam detection, speech recognition, and handwriting recognition being some of its use cases. …
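For a quick taste of the library route (again, a sketch rather than the post’s own code), the snippet below runs K-means on synthetic blobs with scikit-learn:

```python
# A minimal K-means sketch with scikit-learn; the blobs stand in for an
# unlabelled dataset X and are generated purely for illustration.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# 300 unlabelled points grouped around 3 centres (true labels are discarded).
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

print(kmeans.cluster_centers_)  # learned cluster centres
print(kmeans.labels_[:10])      # cluster index assigned to the first 10 points
```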