I worked through the Stanford machine-learning class taught by Andrew Ng. It's really great that Stanford is doing this. I hope more universities follow suit.
Here are my notes. I didn't write up every topic, just the ones I got around to.
Notes
- Linear regression
- Gradient descent
- Neural networks
- Support vector machines
- K-means
- Practical advice for applying machine learning
I also put together a couple of cheat sheets. They aren't comprehensive, just a somewhat personalized take covering the intersection of what the class used and what I tend to forget.
Other topics covered in the course included principal component analysis, anomaly detection, recommender systems, online learning, stochastic gradient descent, and map-reduce.
You can't learn everything in a couple of months. There are a few topics I wish the course had covered: Bayesian networks, naive Bayes classification, random forests, bagging and boosting, and deep learning.
A consistent theme in the class was the efficiency to be gained by vectorizing, especially in combination with functional programming idioms.
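To make that concrete, here's a minimal NumPy sketch of my own (the class itself used Octave): the batch gradient of linear regression's squared-error cost computed first with an explicit loop over training examples, then as a single matrix product. The vectorized form is shorter and much faster.

```python
import numpy as np

def gradient_loop(X, y, theta):
    """Naive version: accumulate the gradient one training example at a time."""
    m, n = X.shape
    grad = np.zeros(n)
    for i in range(m):
        error = X[i].dot(theta) - y[i]
        grad += error * X[i]
    return grad / m

def gradient_vectorized(X, y, theta):
    """Vectorized version: one matrix-vector product does all the work."""
    m = X.shape[0]
    return X.T.dot(X.dot(theta) - y) / m

# Quick check that the two agree on random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((10000, 20))
y = rng.standard_normal(10000)
theta = rng.standard_normal(20)
assert np.allclose(gradient_loop(X, y, theta), gradient_vectorized(X, y, theta))
```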
Stanford is offering a bunch of great classes this winter in nifty topics like natural language processing and probabilistic graphical models. Daphne Koller, who teaches the PGM class, wrote up an inspiring piece in the NY Times about these courses.